UK Property Intelligence
Server Details
An MCP server for UK property data. It wraps Land Registry transactions, Rightmove listings, EPC certificates, Companies House records, and rental yield and stamp duty calculations into 13 tools.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 13 of 13 tools scored. Lowest: 3.1/5.
Most tools have distinct purposes. However, property_report aggregates data also available via property_comps, property_yield, and rental_analysis, which may cause confusion. The planning_search and ppd_transactions tools are clearly separate.
All tool names use consistent snake_case with a domain prefix (company_, planning_, property_, rental_, rightmove_, stamp_). The naming is predictable and descriptive.
13 tools cover a wide range of UK property data without being excessive. Each tool serves a specific function, and the count feels appropriate for the server's purpose.
The set covers company records, planning, transactions, EPC, rental market, Rightmove listings, and stamp duty. There are no tools for Land Registry title deeds or direct property valuation, but the core domains are well represented.
Available Tools
13 tools

company_profile (Grade A)
Get the full Companies House record for a company by number.
Returns registered address, status, incorporation date, officers, and filing history. Use company_search to find a company number by name.
| Name | Required | Description | Default |
|---|---|---|---|
| company_number | Yes | Companies House number (e.g. '00445790'). |
Output Schema
No output parameters.
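The listing does not show how to invoke the tools directly. Below is a minimal sketch using the official `mcp` Python SDK over Streamable HTTP; the server URL is a placeholder (the real endpoint is not shown on this page), and the response shape is undocumented. Later sketches reuse a `session` created this way.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; the listing does not show the real URL

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Direct lookup by Companies House number; use company_search
            # first if you only have a company name.
            result = await session.call_tool(
                "company_profile",
                arguments={"company_number": "00445790"},
            )
            print(result.content)

asyncio.run(main())
```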
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden of behavioral transparency. It mentions the return fields but does not disclose any behavioral traits such as rate limits, authentication requirements, or whether the operation is read-only. This leaves the agent with uncertainty about side effects or restrictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, consisting of two sentences that efficiently convey the purpose and return data. Every sentence adds value, and the most critical information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of this tool (single parameter, no output schema), the description adequately covers what the tool does and what it returns. However, it could be more complete by specifying the structure of the returned data (e.g., format of officers/filing history) or any error conditions. It is nearly complete but leaves minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'company_number' is well-documented in the input schema with a description and example. The tool description does not add further semantic information beyond what the schema provides. Since schema description coverage is 100%, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves the full Companies House record by company number, listing specific return fields (address, status, officers, filing history). It explicitly distinguishes from the sibling tool company_search, which is used to find a company number by name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says to use company_search to find a company number if needed, providing clear guidance on when to use this tool vs. its sibling. However, it does not mention any conditions where this tool should not be used or any prerequisites beyond having a company number.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_search (Grade A)
Search Companies House by company name. Returns a list of matches.
For a direct lookup by company number, use company_profile(company_number="00445790").
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Company name to search (e.g. "Tesco", "Rightmove plc") |
Output Schema
No output parameters.
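A sketch of the two-step name-to-number-to-profile flow the description implies, reusing a `session` created as in the company_profile example above; the hard-coded number is illustrative, since the match format is not documented.

```python
from mcp import ClientSession

async def profile_by_name(session: ClientSession, name: str):
    # Step 1: resolve candidate company numbers by name.
    matches = await session.call_tool("company_search", arguments={"query": name})
    # Step 2: once a number is picked out of the matches, do the direct lookup.
    return await session.call_tool(
        "company_profile",
        arguments={"company_number": "00445790"},  # illustrative; take it from matches
    )
```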
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states it searches and returns a list, which is accurate but lacks additional context like pagination or result limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. First sentence states purpose, second gives alternative. Well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with one parameter and no annotations, description covers purpose and usage adequately. Lacks return format but acceptable without output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes the single parameter with examples (100% coverage). Description adds no extra information beyond what schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Search', resource 'Companies House', and target 'company name'. Distinguishes from sibling 'company_profile' which uses direct lookup by number.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool (search by name) and when to use alternative (company_profile for direct number lookup).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
planning_search (Grade A)
Find the planning portal URL for a UK postcode.
Returns the council name, planning system type, and a direct URL to open in a browser. Does NOT return planning application data — scraping is blocked by council portals. Use the returned search_urls.direct_search link to browse applications manually.
| Name | Required | Description | Default |
|---|---|---|---|
| postcode | Yes | UK postcode (e.g. "S1 1AA", "SW1A 2AA") |
Output Schema
No output parameters.
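A minimal call sketch under the same session setup; per the description, only portal metadata comes back, with the manual link under search_urls.direct_search.

```python
from mcp import ClientSession

async def planning_portal(session: ClientSession, postcode: str = "S1 1AA"):
    # Returns council name, planning system type, and a browser URL,
    # not application data (council portals block scraping).
    return await session.call_tool("planning_search", arguments={"postcode": postcode})
```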
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden. It transparently discloses that the tool only returns a URL, not actual planning data, and explains why scraping is not possible. This is sufficient for a tool of this simplicity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded: first sentence states purpose, second lists return values, third clarifies limitations, fourth gives usage guidance. Every sentence is necessary and contributes to understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having no output schema, the description fully explains the return shape (council name, system type, direct URL) and covers usage limitations. No additional information is needed for an agent to correctly invoke and use this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter (postcode) is already well described in the schema (100% coverage). The description restates 'UK postcode' but adds no new semantic detail beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it finds the planning portal URL for a UK postcode, specifies what it returns, and distinguishes itself by stating what it does NOT do (scraping planning data). This sets it apart from sibling tools like ppd_transactions or company_profile, which handle different data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives clear guidance: use it to get a direct URL for manual browsing, and explicitly states it does NOT return planning application data due to council portal blocks. While it doesn't name alternative tools, the context implies other tools handle different data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ppd_transactions (Grade A)
Search Land Registry transactions by postcode, address, date range, or price.
Use for specific property history ("what has 10 Downing Street sold for?") or filtered market queries ("all sales over 500k in SW1 last year").
| Name | Required | Description | Default |
|---|---|---|---|
| paon | No | Primary address (house name/number) for address-based search | |
| town | No | Town name for address-based search | |
| limit | No | Max results to return (default 25) | |
| street | No | Street name for address-based search | |
| to_date | No | End date filter (ISO format) | |
| postcode | No | UK postcode (e.g. "SW1A 1AA") - required for postcode search | |
| from_date | No | Start date filter (ISO format, e.g. "2023-01-01") | |
| max_price | No | Maximum price filter in £ | |
| min_price | No | Minimum price filter in £ | |
| property_type | No | Filter by type: F=flat, D=detached, S=semi, T=terraced |
Output Schema
No output parameters.
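Hedged sketches of the two query styles the description names (specific property history vs. filtered market search), reusing the session from the first example; all argument values are illustrative.

```python
from mcp import ClientSession

async def ppd_examples(session: ClientSession):
    # Specific property history via an address-based search.
    history = await session.call_tool(
        "ppd_transactions",
        arguments={"paon": "10", "street": "Downing Street", "postcode": "SW1A 2AA"},
    )
    # Filtered market query: sales over £500k in a postcode area since 2024.
    market = await session.call_tool(
        "ppd_transactions",
        arguments={
            "postcode": "SW1A 1AA",
            "from_date": "2024-01-01",
            "min_price": 500000,
            "limit": 25,  # the documented default
        },
    )
    return history, market
```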
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description does not disclose behavioral traits like rate limits, data freshness, or whether it is read-only. It only states it searches, which is insufficient for full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with main action, concise and efficient. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 10 parameters, no output schema, and no annotations. Description lacks information on return format, error behavior, or limitations, making it incomplete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% parameter description coverage, so baseline is 3. Description adds context by grouping search categories (postcode, address, date range, price) but does not add significant semantic nuance beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches Land Registry transactions by various criteria, with specific verb+resource. Examples like 'what has 10 Downing Street sold for?' make purpose concrete and distinct from siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit examples of when to use: specific property history or filtered market queries. Does not mention when not to use or alternatives, but examples give clear context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
property_blocks (Grade A)
Find buildings with multiple flat sales — block buying opportunities.
Groups Land Registry transactions by building to identify blocks being sold off, investor exits, and bulk-buy opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of blocks to return (default 50) | |
| months | No | Lookback period in months (default 24) | |
| postcode | Yes | UK postcode (e.g. "B1 1AA") | |
| search_level | No | Search granularity — "postcode", "sector", or "district" (default "sector") | sector |
| property_type | No | PPD property type filter — "F" for flats, None for all types (default "F") | F |
| min_transactions | No | Minimum sales per building to qualify (default 2) |
Output Schema
No output parameters.
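A call sketch under the same session setup; values are illustrative and lean on the documented defaults.

```python
from mcp import ClientSession

async def find_blocks(session: ClientSession):
    # Buildings with 3+ flat sales in the last 36 months, scanned at the
    # postcode-sector level (the documented default granularity).
    return await session.call_tool(
        "property_blocks",
        arguments={
            "postcode": "B1 1AA",
            "months": 36,
            "min_transactions": 3,
            "search_level": "sector",
        },
    )
```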
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains the tool's core behavior (grouping transactions by building) and goals (identifying blocks), but since no annotations are provided, it should also disclose that it is read-only and any limitations (e.g., data source, update frequency). These are not mentioned, though the purpose is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with the core action and purpose. No redundant or vague phrasing; every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks details about the output format or structure (no output schema provided). While it covers the tool's high-level purpose, it does not explain what properties the returned blocks contain or how to interpret results, leaving some gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All 6 parameters are described in the input schema (100% coverage), so the baseline is 3. The description does not add new meaning beyond the schema; it only provides a high-level context about block buying, not parameter-specific guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Find', 'Groups') and clearly identifies the resource ('buildings with multiple flat sales'). It highlights unique outcomes ('block buying opportunities', 'investor exits', 'bulk-buy opportunities') that distinguish it from sibling tools like ppd_transactions or property_comps.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (e.g., identifying bulk-buy opportunities) but does not explicitly state when to use this tool versus alternatives like ppd_transactions or when not to use it. No direct guidance on prerequisites or exclusion conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
property_comps (Grade A)
Comparable property sales from Land Registry Price Paid Data.
Auto-escalates to a wider search area if fewer than 5 results are found. EPC enrichment adds floor area, price/sqft, and EPC rating to each comp, plus area-level median price/sqft and EPC match rate.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max transactions to return (default 30) | |
| months | No | Lookback period in months (default 24) | |
| address | No | Optional street address to identify subject property and show percentile rank | |
| postcode | Yes | UK postcode (e.g. "SW1A 1AA", "NG11 9HD") | |
| enrich_epc | No | Add floor area, price/sqft, and EPC rating to each comp (default true) | |
| search_level | No | Search area granularity - usually leave as default | sector |
| auto_escalate | No | Widen search area if fewer than 5 results (default true). Set false to keep results local — useful when district-level escalation would include irrelevant areas. | |
| property_type | No | Filter by type: F=flat, D=detached, S=semi, T=terraced (default all) |
Output Schema
No output parameters.
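A sketch showing the auto_escalate override the schema calls out, reusing the same session; values are illustrative.

```python
from mcp import ClientSession

async def local_comps(session: ClientSession):
    # Keep comps strictly local and EPC-enriched: no district-level widening,
    # price/sqft and EPC rating attached to each comp.
    return await session.call_tool(
        "property_comps",
        arguments={
            "postcode": "NG11 9HD",
            "property_type": "T",    # terraced only
            "auto_escalate": False,  # accept fewer than 5 results
            "enrich_epc": True,
        },
    )
```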
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses key behaviors: auto-escalation of search area and EPC enrichment including specific fields. It does not cover potential side effects or authentication, but the disclosed behaviors are important and accurate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, each adding distinct value. No redundant or vague language. It is front-loaded with the core purpose and efficiently adds key behavioral details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters and no output schema, the description provides a good overview and highlights critical behaviors (auto-escalation, enrichment). It could be more complete by describing the return format, but it sufficiently covers the tool's main functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context for auto_escalate and enrich_epc beyond the schema, but for most parameters the description does not provide additional meaning beyond what the schema already specifies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides 'comparable property sales from Land Registry Price Paid Data,' specifying the data source and type of output. It also distinguishes itself from sibling tools like 'ppd_transactions' by focusing on comparables with enrichment features.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding comparable sales but lacks explicit guidance on when to use this tool versus alternatives like 'ppd_transactions' or 'rental_analysis'. No when-not or exclusion criteria are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
property_epc (Grade A)
EPC certificate data for a UK property or postcode area.
With address: returns the matched certificate for that property — energy rating, score, floor area, construction age, heating costs.
Without address: returns all certificates at the postcode with area-level aggregation (rating distribution, floor area range, property type breakdown). Use this for area analysis rather than a single-property lookup.
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | Street address for exact match (omit for area view) | |
| postcode | Yes | UK postcode (e.g. "SW1A 1AA") |
Output Schema
No output parameters.
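A sketch of both modes the description distinguishes, under the same session setup; the street address is a hypothetical example.

```python
from mcp import ClientSession

async def epc_modes(session: ClientSession):
    # Single-property mode: address + postcode returns the matched certificate.
    cert = await session.call_tool(
        "property_epc",
        arguments={"postcode": "SW1A 1AA", "address": "1 Example Street"},
    )
    # Area mode: postcode only returns all certificates plus area aggregation.
    area = await session.call_tool("property_epc", arguments={"postcode": "SW1A 1AA"})
    return cert, area
```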
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the two modes and the types of data returned (energy rating, score, floor area, etc.). However, it does not mention any behavioral traits such as authentication requirements, data freshness, rate limits, or whether the data is cached or live. This leaves some gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (under 80 words) and well-structured. It opens with a clear purpose, then uses a blank line to separate two succinct paragraphs for each mode. Every sentence is informative and earns its place. No unnecessary verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is fairly complete. It covers both usage scenarios and the expected return data. However, it lacks mentions of error handling, pagination, or data limits. For a straightforward lookup, this is adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for both parameters. The description adds significant value by explaining the two usage modes (with/without address) and what each mode returns, which goes beyond the bare schema descriptions. It clarifies that address is optional and how switching modes changes behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: returning EPC certificate data for a UK property or postcode area. It distinguishes between two distinct modes (with address for single property, without for area analysis) and lists specific data fields returned. This specificity differentiates it from sibling tools like property_blocks or property_comps.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use the tool (for property or postcode area) and explains the difference between using an address versus not. It advises using the without-address mode for area analysis. However, it does not explicitly state when not to use this tool or mention alternative siblings for related property data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
property_report (Grade A)
Full data pull for a UK property in one call.
Returns sale history, area comps, EPC rating, rental market listings, current sales market listings, rental yield calculation, and price range from area median.
Requires a street address + postcode for subject property identification. Postcode-only (e.g. "NG1 2NS") returns area-level data without a subject property — use property_comps or property_yield for postcode-only queries.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | Street address with postcode, e.g. "10 Downing Street, SW1A 2AA" | |
| ppd_months | No | Lookback period for comparable sales (default 24) | |
| property_type | No | Filter comparable sales by type: F=flat, D=detached, S=semi, T=terraced (default all) | |
| search_radius | No | Radius in miles for Rightmove searches (default 0.5) | |
| include_rentals | No | Include Rightmove rental market analysis (default true) | |
| include_sales_market | No | Include Rightmove sales market (default true) |
Output Schema
No output parameters.
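A sketch of a full-report call, reusing the same session; per the description, the address string must include the postcode.

```python
from mcp import ClientSession

async def full_report(session: ClientSession):
    return await session.call_tool(
        "property_report",
        arguments={
            "address": "10 Downing Street, SW1A 2AA",  # street address + postcode
            "ppd_months": 24,
            "include_rentals": True,
            "include_sales_market": False,  # skip the Rightmove sales pull
        },
    )
```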
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the transparency burden. It discloses the required input (address+postcode), the behavioral difference of postcode-only mode, and the output contents. It does not discuss side effects (expected for a read operation) or any potential restrictions, but the core behavior is well explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: two short paragraphs front-loaded with the tool's purpose, followed by usage guidance. Every sentence contributes meaningful information with no redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, no output schema, and no annotations, the description effectively communicates the tool's scope and expected return data. It could briefly mention the output format (e.g., JSON), but the provided information is largely sufficient for an agent to decide and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers all parameters with descriptions (100% coverage). The description adds valuable context about the address parameter's dual behavior (full report vs area-level) and reiterates the role of parameters like ppd_months without repeating schema details, thus enhancing understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'Full data pull for a UK property in one call' and enumerates the data categories returned, establishing a specific verb+resource combo. It distinguishes from sibling tools like property_comps and property_yield by noting when those should be used instead.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool (full data pull with address+postcode) and when not to (postcode-only queries should use property_comps or property_yield), providing clear alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
property_yield (Grade B)
Calculate rental yield for a UK postcode.
Combines Land Registry sales data with Rightmove rental listings to produce a gross yield figure.
| Name | Required | Description | Default |
|---|---|---|---|
| months | No | Sales lookback period in months (default 24) | |
| radius | No | Rental search radius in miles (default 0.5) | |
| postcode | Yes | UK postcode (e.g. "NG11", "SW1A 1AA") | |
| search_level | No | "sector" (recommended), "district", or "postcode" | sector |
| property_type | No | Filter comparable sales by type: F=flat, D=detached, S=semi, T=terraced (default all) |
Output Schema
No output parameters.
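A minimal call sketch under the same session setup, sticking to the recommended sector-level search.

```python
from mcp import ClientSession

async def gross_yield(session: ClientSession):
    return await session.call_tool(
        "property_yield",
        arguments={
            "postcode": "NG11",
            "search_level": "sector",  # the recommended setting
            "property_type": "F",      # flats only
        },
    )
```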
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description doesn't disclose any behavioral traits such as data freshness, API costs, or side effects. It only states the data sources used.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core purpose, no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks details about the output format, error handling, or geographic scope beyond 'UK postcode'. With no output schema, the description could be more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage with adequate parameter explanations. The description adds no additional insight beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it calculates rental yield for a UK postcode, which sets it apart from sibling tools like rental_analysis or property_comps, though it doesn't explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like rental_analysis or property_comps. Only implicit that it's for yield calculation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rental_analysis (Grade A)
Rental market analysis for a UK postcode.
Returns median/average rent, listing count, and rent range. Optionally calculates gross yield from a given purchase price. Auto-escalates search radius if local listings are sparse (thin market).
| Name | Required | Description | Default |
|---|---|---|---|
| radius | No | Search radius in miles (default 0.5) | |
| postcode | Yes | UK postcode (e.g. "NG1 1AA") | |
| auto_escalate | No | Widen radius if fewer than 3 listings found (default true) | |
| building_type | No | Filter by building type: F=flat, D=detached, S=semi, T=terraced (default all) | |
| purchase_price | No | Optional purchase price to calculate gross yield |
Output Schema
No output parameters.
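A sketch that exercises the optional yield calculation, reusing the same session; the purchase price is illustrative.

```python
from mcp import ClientSession

async def rent_stats(session: ClientSession):
    # purchase_price triggers the gross-yield calculation on top of the
    # rent statistics; auto_escalate widens the radius in thin markets.
    return await session.call_tool(
        "rental_analysis",
        arguments={
            "postcode": "NG1 1AA",
            "building_type": "F",
            "purchase_price": 180000,
            "auto_escalate": True,
        },
    )
```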
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description carries full burden. It discloses auto-escalation behavior and optional yield calculation. However, it does not explicitly state that this is a read-only operation or discuss data persistence, rate limits, or error handling. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, each adding specific information: purpose, return data, optional yield, auto-escalation. No redundancy, front-loaded with the main verb. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters, no output schema, and no annotations, the description covers the key aspects: inputs, outputs, and notable behaviors. It could mention the structure of results (e.g., data types) or error scenarios, but it is sufficient for basic usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions, so baseline is 3. The description adds value by explaining the auto_escalate behavior and purchase_price usage for yield calculation, going beyond the schema's literal descriptions. This provides extra decision context for the agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb+resource: 'Rental market analysis for a UK postcode' and specifies outputs (median/average rent, listing count, rent range). Does not explicitly differentiate from sibling tools like property_yield or rightmove_search, but the purpose is well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implied usage context (rental market analysis) but no explicit guidance on when to use this tool versus alternatives like property_yield or rightmove_search. The description does not state when not to use it or mention prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rightmove_listing (Grade A)
Fetch full details for a Rightmove listing by ID or URL.
Returns price, tenure, lease years remaining, service charge, ground rent, council tax band, floor area, key features, nearest stations, and floorplan URLs. Set include_images=True to also fetch and return image URLs in the image_urls field; without it, the call returns basic detail only. Fetching images is token-heavy (37k tokens for 8 images), so enable include_images only when photos are needed.
| Name | Required | Description | Default |
|---|---|---|---|
| max_images | No | Max image URLs to include when include_images=True (default 8) | |
| property_id | Yes | Rightmove property URL (e.g. "https://www.rightmove.co.uk/properties/12345678") or numeric ID (e.g. "12345678") | |
| include_images | No | Include image URLs in the response (default False) |
Output Schema
No output parameters.
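A sketch of the staged approach the description suggests, under the same session setup; the property ID is the schema's example value.

```python
from mcp import ClientSession

async def listing_details(session: ClientSession):
    # Fetch basic detail first; images are token-heavy, so only re-fetch
    # with include_images=True (and a low max_images) when photos matter.
    basic = await session.call_tool(
        "rightmove_listing",
        arguments={"property_id": "12345678"},  # numeric ID or full URL
    )
    with_photos = await session.call_tool(
        "rightmove_listing",
        arguments={"property_id": "12345678", "include_images": True, "max_images": 4},
    )
    return basic, with_photos
```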
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full burden. It discloses token usage for image fetching (37k tokens for 8 images), which is a key behavioral trait. However, it omits details like rate limits or whether it modifies data (though read-only is implied).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences, front-loaded with purpose, then payload, then token warning. No redundant information; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description adequately covers purpose, returns, and a key behavioral aspect (token usage). It lacks details on error handling or authentication, but for a data-fetching tool this is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining that property_id accepts URL or numeric ID, and warns about token cost for include_images. This supplements the schema descriptions effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches full details for a Rightmove listing by ID or URL, listing the specific data fields returned. It distinguishes itself from sibling tools like 'rightmove_search' (which searches listings) and other property tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for retrieving full details of a single listing, but does not explicitly state when not to use it or recommend alternatives among sibling tools. No exclusion or context cues are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rightmove_search (Grade B)
Search Rightmove property listings for sale or rent near a postcode.
Returns prices, addresses, bedrooms, agent details, and listing URLs.
| Name | Required | Description | Default |
|---|---|---|---|
| radius | No | Search radius in miles (default varies by area) | |
| sort_by | No | Sort order: "newest", "oldest", "price_low", "price_high", "most_reduced" (default: Rightmove default) | |
| postcode | Yes | UK postcode (e.g. "NG1 1AA", "SW1A 2AA") | |
| max_pages | No | Max pages to fetch (default 1, ~25 listings per page) | |
| max_price | No | Maximum price/rent filter in £ | |
| min_price | No | Minimum price/rent filter in £ | |
| max_bedrooms | No | Maximum bedrooms filter | |
| min_bedrooms | No | Minimum bedrooms filter | |
| building_type | No | Filter by building type: F=flat, D=detached, S=semi, T=terraced (default all) | |
| property_type | No | "sale" or "rent" (default "sale") | sale |
Output Schema
No output parameters.
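A filtered-search sketch reusing the same session; filter values are illustrative.

```python
from mcp import ClientSession

async def flats_for_sale(session: ClientSession):
    # Two-bed-plus flats for sale near a postcode, cheapest first,
    # one page of roughly 25 listings.
    return await session.call_tool(
        "rightmove_search",
        arguments={
            "postcode": "NG1 1AA",
            "property_type": "sale",
            "building_type": "F",
            "min_bedrooms": 2,
            "max_price": 250000,
            "sort_by": "price_low",
            "max_pages": 1,
        },
    )
```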
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It mentions return fields but fails to disclose behavioral traits like rate limits, data freshness, or that it fetches data from Rightmove in real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff, front-loaded with action and resource. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 10 parameters and no output schema, the description is too brief. It omits default behavior (sale, 1 page), pagination, and radius details, leaving gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. The description adds context about output fields (prices, addresses) but does not elaborate on parameters beyond the schema. Acceptable but not additive.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches Rightmove property listings for sale or rent near a postcode, with a specific verb and resource. It distinguishes itself from sibling tools like company_search or planning_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives provided. The description implies it is for general property search, but does not differentiate from related tools like rightmove_listing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stamp_duty (Grade B)
Calculate UK Stamp Duty Land Tax (SDLT) for a residential property.
| Name | Required | Description | Default |
|---|---|---|---|
| price | Yes | Purchase price in £ | |
| non_resident | No | True if buyer not UK resident (+2% surcharge) | |
| first_time_buyer | No | True for first-time buyer relief (up to £300k nil rate) | |
| additional_property | No | True if buying additional property (+5% surcharge) |
Output Schema
No output parameters.
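A call sketch combining the surcharge flags from the schema, under the same session setup; the price is illustrative.

```python
from mcp import ClientSession

async def sdlt_quote(session: ClientSession):
    # A non-resident buying an additional property stacks both documented
    # surcharges (+5% and +2%) on the standard bands.
    return await session.call_tool(
        "stamp_duty",
        arguments={
            "price": 450000,
            "additional_property": True,
            "non_resident": True,
            "first_time_buyer": False,
        },
    )
```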
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry full burden. It only says 'calculate' without disclosing any behavioral traits such as side effects, external calls, or reliance on tax year data. The safety profile is assumed but not stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that is front-loaded with the core purpose. Every word is informative and there is no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is reasonably complete for a simple calculator with no output schema, but it lacks specification of the return value format. It also does not say how non-residential or mixed-use purchases are handled, leaving some ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter having a clear description explaining its effect (e.g., surcharge percentages). The description adds no additional meaning beyond what is already in the schema, meeting the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates UK Stamp Duty Land Tax for residential properties, using a specific verb and resource. It distinguishes itself from sibling tools which deal with property data or company info, not tax calculations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. It only states what it does without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Once verified, you can:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.