
UK Property Intelligence

Server Details

UK property data MCP server. Wraps Land Registry, Rightmove, EPC, rental yields, stamp duty, and Companies House into 13 tools.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 3.9/5 across 13 of 13 tools scored. Lowest: 3.1/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes. However, property_report aggregates data also available via property_comps, property_yield, and rental_analysis, which may cause confusion. planning_search and ppd_transactions are clearly separate.

Naming Consistency: 5/5

All tool names use consistent snake_case with a domain prefix (company_, planning_, property_, rental_, rightmove_, stamp_). The naming is predictable and descriptive.

Tool Count: 5/5

13 tools cover a wide range of UK property data without being excessive. Each tool serves a specific function, and the count feels appropriate for the server's purpose.

Completeness: 4/5

The set covers company records, planning, transactions, EPC, rental market, Rightmove listings, and stamp duty. Missing tools for land registry title deeds or direct property valuation, but the core domains are well represented.

Available Tools (13 tools)
company_profile: A

Get the full Companies House record for a company by number.

Returns registered address, status, incorporation date, officers, and filing history. Use company_search to find a company number by name.

Parameters (JSON Schema)
company_number (required): Companies House number (e.g. '00445790').

Output Schema: no output parameters.
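To make the schema concrete, here is a sketch of the JSON-RPC payload an MCP client would send to invoke this tool. The tools/call shape follows the MCP specification; the company number is the example value from the parameter table.

```python
import json

# MCP "tools/call" request for company_profile. The envelope shape follows
# the MCP spec; the company number is the schema's own example value.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "company_profile",
        "arguments": {"company_number": "00445790"},
    },
}

print(json.dumps(request))
```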

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must carry the full burden of behavioral transparency. It mentions the return fields but does not disclose any behavioral traits such as rate limits, authentication requirements, or whether the operation is read-only. This leaves the agent with uncertainty about side effects or restrictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, consisting of two sentences that efficiently convey the purpose and return data. Every sentence adds value, and the most critical information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of this tool (single parameter, no output schema), the description adequately covers what the tool does and what it returns. However, it could be more complete by specifying the structure of the returned data (e.g., format of officers/filing history) or any error conditions. It is nearly complete but leaves minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The sole parameter 'company_number' is well-documented in the input schema with a description and example. The tool description does not add further semantic information beyond what the schema provides. Since schema description coverage is 100%, a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool retrieves the full Companies House record by company number, listing specific return fields (address, status, officers, filing history). It explicitly distinguishes from the sibling tool company_search, which is used to find a company number by name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says to use company_search to find a company number if needed, providing clear guidance on when to use this tool vs. its sibling. However, it does not mention any conditions where this tool should not be used or any prerequisites beyond having a company number.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ppd_transactions: A

Search Land Registry transactions by postcode, address, date range, or price.

Use for specific property history ("what has 10 Downing Street sold for?") or filtered market queries ("all sales over 500k in SW1 last year").

Parameters (JSON Schema)
paon (optional): Primary address (house name/number) for address-based search.
town (optional): Town name for address-based search.
limit (optional): Max results to return (default 25).
street (optional): Street name for address-based search.
to_date (optional): End date filter (ISO format).
postcode (optional): UK postcode (e.g. "SW1A 1AA"); required for postcode search.
from_date (optional): Start date filter (ISO format, e.g. "2023-01-01").
max_price (optional): Maximum price filter in £.
min_price (optional): Minimum price filter in £.
property_type (optional): Filter by type: F=flat, D=detached, S=semi, T=terraced.

Output Schema: no output parameters.
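The two usage patterns the description names map onto the schema like this. The argument dicts below are an illustrative sketch: the field names come from the schema above, and the specific postcodes, dates, and prices are made up.

```python
# Specific property history: address-based search
# ("what has 10 Downing Street sold for?").
history_args = {
    "paon": "10",
    "street": "Downing Street",
    "postcode": "SW1A 2AA",
}

# Filtered market query: "all sales over 500k in SW1 last year"
# (dates are illustrative; "last year" is whatever range you pass).
market_args = {
    "postcode": "SW1A 1AA",
    "min_price": 500_000,
    "from_date": "2024-01-01",
    "to_date": "2024-12-31",
    "limit": 25,  # schema default, stated explicitly for clarity
}
```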

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, and the description does not disclose behavioral traits like rate limits, data freshness, or whether it is read-only. It only states it searches, which is insufficient for full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with main action, concise and efficient. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has 10 parameters, no output schema, and no annotations. Description lacks information on return format, error behavior, or limitations, making it incomplete for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% parameter description coverage, so baseline is 3. Description adds context by grouping search categories (postcode, address, date range, price) but does not add significant semantic nuance beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches Land Registry transactions by various criteria, with specific verb+resource. Examples like 'what has 10 Downing Street sold for?' make purpose concrete and distinct from siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit examples of when to use: specific property history or filtered market queries. Does not mention when not to use or alternatives, but examples give clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

property_blocks: A

Find buildings with multiple flat sales — block buying opportunities.

Groups Land Registry transactions by building to identify blocks being sold off, investor exits, and bulk-buy opportunities.

Parameters (JSON Schema)
limit (optional): Maximum number of blocks to return (default 50).
months (optional): Lookback period in months (default 24).
postcode (required): UK postcode (e.g. "B1 1AA").
search_level (optional, default "sector"): Search granularity: "postcode", "sector", or "district".
property_type (optional, default "F"): PPD property type filter: "F" for flats, None for all types.
min_transactions (optional): Minimum sales per building to qualify (default 2).

Output Schema: no output parameters.
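As a sketch, a call that tightens the defaults might pass arguments like these. Field names come from the schema above; the specific values are illustrative.

```python
# property_blocks: buildings in the B1 sector with at least 3 flat sales
# in the last 12 months. Values are illustrative, not server defaults.
args = {
    "postcode": "B1 1AA",
    "search_level": "sector",   # the default; "postcode"/"district" also accepted
    "property_type": "F",       # flats only (the default)
    "min_transactions": 3,      # raised above the default of 2
    "months": 12,               # shortened from the 24-month default lookback
}
```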

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description explains the tool's core behavior (grouping transactions by building) and goals (identifying blocks), but since no annotations are provided, it should also disclose that it is read-only and any limitations (e.g., data source, update frequency). These are not mentioned, though the purpose is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, front-loaded with the core action and purpose. No redundant or vague phrasing; every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks details about the output format or structure (no output schema provided). While it covers the tool's high-level purpose, it does not explain what properties the returned blocks contain or how to interpret results, leaving some gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 6 parameters are described in the input schema (100% coverage), so the baseline is 3. The description does not add new meaning beyond the schema; it only provides a high-level context about block buying, not parameter-specific guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Find', 'Groups') and clearly identifies the resource ('buildings with multiple flat sales'). It highlights unique outcomes ('block buying opportunities', 'investor exits', 'bulk-buy opportunities') that distinguish it from sibling tools like ppd_transactions or property_comps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (e.g., identifying bulk-buy opportunities) but does not explicitly state when to use this tool versus alternatives like ppd_transactions or when not to use it. No direct guidance on prerequisites or exclusion conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

property_comps: A

Comparable property sales from Land Registry Price Paid Data.

Auto-escalates to wider search area if fewer than 5 results found. EPC enrichment adds floor area, price/sqft, and EPC rating to each comp, plus area-level median price/sqft and EPC match rate.

Parameters (JSON Schema)
limit (optional): Max transactions to return (default 30).
months (optional): Lookback period in months (default 24).
address (optional): Optional street address to identify subject property and show percentile rank.
postcode (required): UK postcode (e.g. "SW1A 1AA", "NG11 9HD").
enrich_epc (optional): Add floor area, price/sqft, and EPC rating to each comp (default true).
search_level (optional, default "sector"): Search area granularity; usually leave as default.
auto_escalate (optional): Widen search area if fewer than 5 results (default true). Set false to keep results local; useful when district-level escalation would include irrelevant areas.
property_type (optional): Filter by type: F=flat, D=detached, S=semi, T=terraced (default all).

Output Schema: no output parameters.
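A sketch showing when to override auto_escalate, per the schema's note about keeping results local. Field names are from the schema; values are illustrative.

```python
# property_comps with auto-escalation disabled, so results stay strictly
# local even if fewer than 5 comps are found. EPC enrichment left at its
# default of true (adds floor area, price/sqft, EPC rating per comp).
args = {
    "postcode": "NG11 9HD",     # one of the schema's example postcodes
    "auto_escalate": False,     # don't widen to sector/district level
    "enrich_epc": True,
    "property_type": "S",       # semi-detached comps only
    "months": 24,
    "limit": 30,
}
```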

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses key behaviors: auto-escalation of search area and EPC enrichment including specific fields. It does not cover potential side effects or authentication, but the disclosed behaviors are important and accurate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, each adding distinct value. No redundant or vague language. It is front-loaded with the core purpose and efficiently adds key behavioral details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters and no output schema, the description provides a good overview and highlights critical behaviors (auto-escalation, enrichment). It could be more complete by describing the return format, but it sufficiently covers the tool's main functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds context for auto_escalate and enrich_epc beyond the schema, but for most parameters the description does not provide additional meaning beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides 'comparable property sales from Land Registry Price Paid Data,' specifying the data source and type of output. It also distinguishes itself from sibling tools like 'ppd_transactions' by focusing on comparables with enrichment features.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for finding comparable sales but lacks explicit guidance on when to use this tool versus alternatives like 'ppd_transactions' or 'rental_analysis'. No when-not or exclusion criteria are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

property_epc: A

EPC certificate data for a UK property or postcode area.

With address: returns the matched certificate for that property — energy rating, score, floor area, construction age, heating costs.

Without address: returns all certificates at the postcode with area-level aggregation (rating distribution, floor area range, property type breakdown). Use this for area analysis rather than a single-property lookup.

Parameters (JSON Schema)
address (optional): Street address for exact match (omit for area view).
postcode (required): UK postcode (e.g. "SW1A 1AA").

Output Schema: no output parameters.
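The two modes described above map onto the schema as follows; the address and postcode values are illustrative.

```python
# Mode 1: single-property lookup -- include the address to get the matched
# certificate (rating, score, floor area, construction age, heating costs).
single_property = {"postcode": "SW1A 1AA", "address": "10 Downing Street"}

# Mode 2: area view -- omit the address to get all certificates at the
# postcode plus area-level aggregation (rating distribution, etc.).
area_view = {"postcode": "SW1A 1AA"}
```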

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the two modes and the types of data returned (energy rating, score, floor area, etc.). However, it does not mention any behavioral traits such as authentication requirements, data freshness, rate limits, or whether the data is cached or live. This leaves some gaps in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (under 80 words) and well-structured. It opens with a clear purpose, then uses a blank line to separate two succinct paragraphs for each mode. Every sentence is informative and earns its place. No unnecessary verbiage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is fairly complete. It covers both usage scenarios and the expected return data. However, it lacks mentions of error handling, pagination, or data limits. For a straightforward lookup, this is adequate but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for both parameters. The description adds significant value by explaining the two usage modes (with/without address) and what each mode returns, which goes beyond the bare schema descriptions. It clarifies that address is optional and how switching modes changes behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: returning EPC certificate data for a UK property or postcode area. It distinguishes between two distinct modes (with address for single property, without for area analysis) and lists specific data fields returned. This specificity differentiates it from sibling tools like property_blocks or property_comps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use the tool (for property or postcode area) and explains the difference between using an address versus not. It advises using the without-address mode for area analysis. However, it does not explicitly state when not to use this tool or mention alternative siblings for related property data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

property_report: A

Full data pull for a UK property in one call.

Returns sale history, area comps, EPC rating, rental market listings, current sales market listings, rental yield calculation, and price range from area median.

Requires a street address + postcode for subject property identification. Postcode-only (e.g. "NG1 2NS") returns area-level data without a subject property — use property_comps or property_yield for postcode-only queries.

Parameters (JSON Schema)
address (required): Street address with postcode, e.g. "10 Downing Street, SW1A 2AA".
ppd_months (optional): Lookback period for comparable sales (default 24).
property_type (optional): Filter comparable sales by type: F=flat, D=detached, S=semi, T=terraced (default all).
search_radius (optional): Radius in miles for Rightmove searches (default 0.5).
include_rentals (optional): Include Rightmove rental market analysis (default true).
include_sales_market (optional): Include Rightmove sales market (default true).

Output Schema: no output parameters.
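A sketch of a valid call, showing the combined address-plus-postcode format the tool requires in its single address field. The address comes from the schema's own example; the other values are illustrative.

```python
# property_report: address and postcode go together in one string. A bare
# postcode would fall back to area-level data, for which the description
# recommends property_comps or property_yield instead.
args = {
    "address": "10 Downing Street, SW1A 2AA",
    "ppd_months": 24,               # default lookback for comparable sales
    "search_radius": 0.5,           # miles, for the Rightmove searches
    "include_rentals": True,
    "include_sales_market": False,  # skip the sales-market section
}
```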

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the transparency burden. It discloses the required input (address+postcode), the behavioral difference of postcode-only mode, and the output contents. It does not discuss side effects (expected for a read operation) or any potential restrictions, but the core behavior is well explained.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: two short paragraphs front-loaded with the tool's purpose, followed by usage guidance. Every sentence contributes meaningful information with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters, no output schema, and no annotations, the description effectively communicates the tool's scope and expected return data. It could briefly mention the output format (e.g., JSON), but the provided information is largely sufficient for an agent to decide and invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers all parameters with descriptions (100% coverage). The description adds valuable context about the address parameter's dual behavior (full report vs area-level) and reiterates the role of parameters like ppd_months without repeating schema details, thus enhancing understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a 'Full data pull for a UK property in one call' and enumerates the data categories returned, establishing a specific verb+resource combo. It distinguishes from sibling tools like property_comps and property_yield by noting when those should be used instead.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool (full data pull with address+postcode) and when not to (postcode-only queries should use property_comps or property_yield), providing clear alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

property_yield: B

Calculate rental yield for a UK postcode.

Combines Land Registry sales data with Rightmove rental listings to produce a gross yield figure.

Parameters (JSON Schema)
months (optional): Sales lookback period in months (default 24).
radius (optional): Rental search radius in miles (default 0.5).
postcode (required): UK postcode (e.g. "NG11", "SW1A 1AA").
search_level (optional, default "sector"): "sector" (recommended), "district", or "postcode".
property_type (optional): Filter comparable sales by type: F=flat, D=detached, S=semi, T=terraced (default all).

Output Schema: no output parameters.
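A sketch using a district-level postcode, which the schema's examples ("NG11") suggest is accepted alongside full postcodes. Values are illustrative.

```python
# property_yield over the NG11 district, aggregated at the recommended
# sector level, restricted to terraced comparables. Illustrative values.
args = {
    "postcode": "NG11",
    "search_level": "sector",   # recommended default; "district"/"postcode" also valid
    "property_type": "T",       # terraced comparable sales only
    "months": 24,               # schema default lookback
    "radius": 0.5,              # schema default rental search radius, miles
}
```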

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, and the description doesn't disclose any behavioral traits such as data freshness, API costs, or side effects. It only states the data sources used.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose, no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks details about the output format, error handling, or geographic scope beyond 'UK postcode'. With no output schema, the description could be more informative.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with adequate parameter explanations. The description adds no additional insight beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates rental yield for a UK postcode, which implicitly sets it apart from sibling tools like rental_analysis or property_comps, but it does not explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like rental_analysis or property_comps. Only implicit that it's for yield calculation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rental_analysis: A

Rental market analysis for a UK postcode.

Returns median/average rent, listing count, and rent range. Optionally calculates gross yield from a given purchase price. Auto-escalates search radius if local listings are sparse (thin market).

Parameters (JSON Schema)
radius (optional): Search radius in miles (default 0.5).
postcode (required): UK postcode (e.g. "NG1 1AA").
auto_escalate (optional): Widen radius if fewer than 3 listings found (default true).
building_type (optional): Filter by building type: F=flat, D=detached, S=semi, T=terraced (default all).
purchase_price (optional): Optional purchase price to calculate gross yield.

Output Schema: no output parameters.
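A sketch of the optional-yield path. The gross-yield arithmetic below is the standard definition (annual rent divided by purchase price), computed here only to illustrate what passing purchase_price asks the server for; the server's own calculation may differ in detail, and the rent figure is invented.

```python
# rental_analysis with the optional purchase_price, which triggers the
# gross-yield calculation. Illustrative values throughout.
args = {
    "postcode": "NG1 1AA",
    "purchase_price": 180_000,
    "auto_escalate": True,   # widen radius if fewer than 3 listings found
}

# Standard gross yield, for intuition: annual rent / purchase price.
# Assumes a hypothetical £900/month median rent from the listings.
monthly_rent = 900
gross_yield = monthly_rent * 12 / args["purchase_price"] * 100
print(f"{gross_yield:.1f}%")  # 6.0%
```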

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the auto-escalation behavior and the optional yield calculation. However, it does not explicitly state that this is a read-only operation or discuss data persistence, rate limits, or error handling. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, each adding specific information: purpose, return data, optional yield, auto-escalation. No redundancy, front-loaded with the main verb. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters, no output schema, and no annotations, the description covers the key aspects: inputs, outputs, and notable behaviors. It could mention the structure of results (e.g., data types) or error scenarios, but it is sufficient for basic usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions, so baseline is 3. The description adds value by explaining the auto_escalate behavior and purchase_price usage for yield calculation, going beyond the schema's literal descriptions. This provides extra decision context for the agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb+resource: 'Rental market analysis for a UK postcode' and specifies outputs (median/average rent, listing count, rent range). Does not explicitly differentiate from sibling tools like property_yield or rightmove_search, but the purpose is well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage context (rental market analysis) but no explicit guidance on when to use this tool versus alternatives like property_yield or rightmove_search. The description does not state when not to use it or mention prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

rightmove_listingAInspect

Fetch full details for a Rightmove listing by ID or URL.

Returns price, tenure, lease years remaining, service charge, ground rent, council tax band, floor area, key features, nearest stations, and floorplan URLs. Set include_images=True to also fetch and return image URLs (use rightmove_listing without include_images for basic detail, check image_urls field for photos).

ParametersJSON Schema
NameRequiredDescriptionDefault
max_imagesNoMax image URLs to include when include_images=True (default 8)
property_idYesRightmove property URL (e.g. "https://www.rightmove.co.uk/properties/12345678") or numeric ID (e.g. "12345678")
include_imagesNoInclude image URLs in the response (default False)

Output Schema

ParametersJSON Schema
NameRequiredDescription

No output parameters
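Because property_id accepts either a full Rightmove URL or a bare numeric ID, a client can normalise the input before building the call. A sketch under that assumption (the helper and regex are illustrative, not part of the server):

```python
import re

def normalise_property_id(value: str) -> str:
    """Accept a Rightmove URL or bare numeric ID; return the numeric ID."""
    if value.isdigit():
        return value
    match = re.search(r"/properties/(\d+)", value)
    if not match:
        raise ValueError(f"cannot extract a Rightmove property ID from: {value!r}")
    return match.group(1)

# Arguments for a rightmove_listing call; include_images=True raises token cost.
args = {
    "property_id": normalise_property_id(
        "https://www.rightmove.co.uk/properties/12345678"
    ),
    "include_images": False,
}
```

Normalising up front means an agent can pass through whatever form the user supplied without branching at the call site.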

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries full burden. It discloses token usage for image fetching (37k tokens for 8 images), which is a key behavioral trait. However, it omits details like rate limits or whether it modifies data (though read-only is implied).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is four sentences, front-loaded with purpose, then payload, then token warning. No redundant information; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description adequately covers purpose, returns, and a key behavioral aspect (token usage). It lacks details on error handling or authentication, but for a data-fetching tool this is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds value by explaining that property_id accepts URL or numeric ID, and warns about token cost for include_images. This supplements the schema descriptions effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool fetches full details for a Rightmove listing by ID or URL, listing the specific data fields returned. It distinguishes itself from sibling tools like 'rightmove_search' (which searches listings) and other property tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for retrieving full details of a single listing, but does not explicitly state when not to use it or recommend alternatives among sibling tools. No exclusion or context cues are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stamp_dutyBInspect

Calculate UK Stamp Duty Land Tax (SDLT) for a residential property.

ParametersJSON Schema
NameRequiredDescriptionDefault
priceYesPurchase price in £
non_residentNoTrue if buyer not UK resident (+2% surcharge)
first_time_buyerNoTrue for first-time buyer relief (up to £300k nil rate)
additional_propertyNoTrue if buying additional property (+5% surcharge)

Output Schema

ParametersJSON Schema
NameRequiredDescription

No output parameters
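The surcharges described in the parameter table compose on top of banded rates. A sketch of the banded calculation, assuming the post-April-2025 standard residential bands and omitting first-time-buyer relief for brevity (SDLT rates change with fiscal events, so treat the figures as illustrative, not authoritative):

```python
def sdlt(price: float, additional_property: bool = False,
         non_resident: bool = False) -> float:
    """Banded SDLT; bands assume standard residential rates from April 2025."""
    bands = [  # (upper bound of band, marginal rate)
        (125_000, 0.00),
        (250_000, 0.02),
        (925_000, 0.05),
        (1_500_000, 0.10),
        (float("inf"), 0.12),
    ]
    # Surcharges add a flat percentage to every band's rate.
    surcharge = (0.05 if additional_property else 0.0) \
        + (0.02 if non_resident else 0.0)
    tax, lower = 0.0, 0.0
    for upper, rate in bands:
        if price > lower:
            tax += (min(price, upper) - lower) * (rate + surcharge)
        lower = upper
    return round(tax, 2)

# £350,000 main residence: 0% on 125k + 2% on 125k + 5% on 100k = £7,500
```

Note how the additional-property surcharge is applied per band, which works out to a flat 5% of the full price on top of the standard bill.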

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry full burden. It only says 'calculate' without disclosing any behavioral traits such as side effects, external calls, or reliance on tax year data. The safety profile is assumed but not stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that is front-loaded with the core purpose. Every word is informative and there is no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is reasonably complete for a simple calculator with no output schema, but it lacks specification of the return value format. While it scopes the calculation to residential property, it does not say how non-residential or mixed-use inputs are handled, leaving some ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with each parameter having a clear description explaining its effect (e.g., surcharge percentages). The description adds no additional meaning beyond what is already in the schema, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool calculates UK Stamp Duty Land Tax for residential properties, using a specific verb and resource. It distinguishes itself from sibling tools, which deal with property data or company info rather than tax calculations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. It only states what it does without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
