Server Details

UK government data intelligence platform. 14 enriched endpoints with proprietary scoring across company, location, property, environmental, market, trade, education, transport, vehicle, health, energy, legal, and procurement data from 400+ official UK sources.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (B)

Average 3.7/5 across 25 of 25 tools scored. Lowest: 2.9/5.

Server Coherence (A)
Disambiguation 5/5

Each tool targets a distinct domain (compliance, connectivity, demographics, directors, etc.) with clear descriptions and explicit cross-references to related tools, eliminating ambiguity.

Naming Consistency 4/5

Most tools follow the 'uk_*_intelligence' pattern, but 'uk_stamp_duty_calculator', 'uk_vat_validation', and 'uk_vehicle_intelligence' deviate slightly, though they maintain the 'uk_' prefix and descriptive structure.

Tool Count 4/5

With 25 tools covering a broad range of UK data topics, the count is high but justified by the API's comprehensive scope. Could be streamlined slightly, but still reasonable.

Completeness 4/5

The tool set is extensive, covering most common UK data needs (company, property, planning, environment, demographics, etc.), though a dedicated crime tool is absent (crime data is bundled in location_intelligence).

Available Tools

25 tools
uk_compliance_intelligence (A)
Read-only · Idempotent

Assess a UK company's regulatory compliance posture across multiple domains: ICO data protection registration, gender pay gap reporting, modern slavery statements, HSE enforcement notices, environmental permits, and gambling regulation. Returns a Compliance Score (0-100) with EXCELLENT/GOOD/ADEQUATE/CONCERNING/POOR rating and per-domain signals. Use this for pre-acquisition due diligence, supplier compliance checks, or ESG assessments. Companies below regulatory thresholds (e.g., <250 employees for gender pay gap) are scored neutrally, not penalised. For financial risk assessment, use uk_entity_intelligence instead. For director-level risk, use uk_director_intelligence. Sources: ICO, Gender Pay Gap Service, Modern Slavery Registry, HSE, Environment Agency, Gambling Commission.

Parameters (JSON Schema)
- depth (optional): Detail level. "summary" = ICO + score only (5 credits). "standard" = adds GPG, modern slavery, HSE (15 credits). "full" = adds environmental permits, gambling regulation (30 credits). Default: "standard".
- identifier (required): Company number (e.g., "00445790") or company name (e.g., "Tesco"). Company numbers give the most precise results.
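The depth and identifier parameters map directly into an MCP tools/call request. The sketch below shows the JSON-RPC body an MCP client might send; the envelope shape follows the MCP specification, while the endpoint URL and auth are whatever your client or the Glama gateway supplies, so treat it as illustrative rather than a verified wire capture.

```python
def build_compliance_call(identifier: str, depth: str = "standard") -> dict:
    """Sketch of an MCP tools/call request body for uk_compliance_intelligence.

    The JSON-RPC envelope follows the MCP spec; transport details
    (the Streamable HTTP endpoint, headers) are handled by the client.
    """
    if depth not in ("summary", "standard", "full"):
        raise ValueError("depth must be summary, standard, or full")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "uk_compliance_intelligence",
            "arguments": {"identifier": identifier, "depth": depth},
        },
    }

payload = build_compliance_call("00445790", depth="full")
```

Picking depth up front matters here because each level has a different credit cost (5/15/30).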
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly, idempotent, non-destructive behavior. The description adds valuable context: companies below thresholds are scored neutrally (not penalised), and lists all data sources. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single well-structured paragraph that front-loads the purpose, then covers return values, usage guidelines, exclusions, and sources. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description fully covers return values (Compliance Score, rating categories, per-domain signals). It also explains threshold behavior and lists all six domains and sources. Complete for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds a bit of extra context for the 'depth' parameter by mentioning credit costs and default value, providing additional semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: assessing a UK company's regulatory compliance across multiple domains. It lists specific domains and differentiates from sibling tools by naming alternatives for financial risk (uk_entity_intelligence) and director-level risk (uk_director_intelligence).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use (pre-acquisition due diligence, supplier checks, ESG) and when not to (financial risk → use uk_entity_intelligence; director risk → use uk_director_intelligence), providing clear guidance for selecting this tool over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_connectivity_intelligence (B)
Read-only · Idempotent

Evaluate digital infrastructure for any UK postcode. Returns broadband speeds (mean and median), superfast (>30 Mbps) and ultrafast (>100 Mbps) availability, FTTP coverage, mobile 4G indoor and outdoor coverage by operator (EE, Three, O2, Vodafone) with signal strength, and a Digital Readiness Score (0-100). Use this tool to assess remote working viability, evaluate business premises connectivity, or compare digital infrastructure between locations. For physical transport links (rail, bus, roads), use uk_transport_intelligence instead. Source: Ofcom Connected Nations.

Parameters (JSON Schema)
- depth (optional, default "standard"): Controls response detail. summary: broadband speeds and availability only. standard: adds mobile 4G coverage by operator and Digital Readiness Score. full: adds per-operator signal strength breakdown, FTTP availability, and ultrafast metrics.
- postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Connectivity data is returned for the exchange and mobile cell area serving this postcode.
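Since this and the other postcode-based tools expect a full UK postcode, a client can pre-validate input before spending a call. A rough sketch; the regex is a simplified approximation of the postcode grammar, not the full set of official rules (special cases like GIR 0AA are not handled):

```python
import re

# Simplified UK postcode check, good enough for pre-flight validation.
POSTCODE_RE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}$", re.IGNORECASE)

def normalise_postcode(raw: str) -> str:
    """Validate a postcode and normalise it to 'OUTWARD INWARD' form."""
    compact = raw.strip().upper().replace(" ", "")
    if not POSTCODE_RE.match(compact):
        raise ValueError(f"not a valid UK postcode: {raw!r}")
    # Re-insert the space before the 3-character inward code.
    return compact[:-3] + " " + compact[-3:]
```

Normalising client-side avoids burning a request on input the server would reject anyway.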
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral aspects. The description mentions data sources (Ofcom Connected Nations) and score range (0-100) but does not disclose critical traits: Are queries cached? Is the data updated monthly? Are there usage limits or API key requirements? For a data intelligence tool, such behavioral details are important for proper invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise: two sentences that efficiently communicate the tool's purpose, key data points, and data source. No extraneous information, and key details are front-loaded. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has a simple schema (2 params, 1 postcode required) and no output schema. The description covers the data returned and depth levels. It is complete for basic usage, but lacks behavioral details (caching, update frequency). Given low complexity, the description is nearly sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already explains both parameters well. The description restates the depth levels (summary = broadband only, etc.) but adds little meaning beyond the schema. Given full schema coverage, a baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides UK digital connectivity intelligence including broadband speeds, availability, mobile 4G coverage by operator, and a Digital Readiness Score. It distinguishes itself from sibling tools (other uk_*_intelligence tools) by specifying the domain of connectivity. However, it could be more precise about the action (e.g., 'retrieve' or 'get') and the specific resource (postcode-based connectivity data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage via the depth parameter, which controls the breadth of data returned, but provides no guidance on when to use this tool versus other uk_*_intelligence tools. No alternatives or exclusions are mentioned, but the context of sibling tools suggests domain-specific usage. A clear usage scenario (e.g., 'Use for site selection or property assessments') would improve this score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_demographics_intelligence (B)
Read-only · Idempotent

Analyse population demographics and deprivation for any UK postcode at LSOA level. Returns Census 2021 data (population, age, ethnicity, housing tenure, economic activity), Index of Multiple Deprivation with ranks and deciles across all seven domains, Nomis labour market statistics, and a Consumer Spending Power Index (0-100). Use this tool for retail site selection, social impact assessment, or grant applications requiring deprivation evidence. For broader area profiling (crime, flood, food hygiene), use uk_location_intelligence instead. For market opportunity analysis with competitor data, use uk_market_sizing instead. Sources: ONS Census 2021, Nomis, MHCLG IMD.

Parameters (JSON Schema)
- depth (optional, default "standard"): Controls response detail. summary: labour market statistics only. standard: adds Census 2021 population and housing data, IMD ranks, and Consumer Spending Power Index. full: adds detailed Census breakdowns by age band, ethnicity, tenure type, and economic activity category.
- postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Data is returned for the Lower Super Output Area (LSOA) containing this postcode.
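For agents consuming the IMD deciles this tool returns, the standard MHCLG convention is that decile 1 is the most deprived 10% of LSOAs and decile 10 the least deprived. A small helper illustrating that reading; since no output schema is published, it operates on a bare integer rather than an assumed response field:

```python
def describe_imd_decile(decile: int) -> str:
    """Label an Index of Multiple Deprivation decile in plain English.

    MHCLG convention: decile 1 = most deprived 10% of LSOAs,
    decile 10 = least deprived 10%.
    """
    if not 1 <= decile <= 10:
        raise ValueError("IMD deciles run from 1 to 10")
    low, high = (decile - 1) * 10, decile * 10
    if decile <= 5:
        return f"among the {low}-{high}% most deprived LSOAs"
    return f"among the {100 - high}-{100 - low}% least deprived LSOAs"
```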
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for behavioral disclosure. It does not mention any destructive or read-only behavior, rate limits, or data update frequency. The tool likely reads data, but this is not explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (two sentences) and front-loaded with the tool's purpose. It lists data sources, which is helpful but could be integrated into the first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description could mention return format or example usage. It adequately covers the tool's scope and sources but leaves ambiguity about what the output looks like.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds value by explaining the 'depth' enum values: summary, standard, and full with brief definitions. This clarifies the parameter beyond the schema's enumeration.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides UK demographics data from Census 2021 and other official sources, listing specific data categories like population, housing, and economic activity. However, it does not differentiate from sibling tools such as uk_education_intelligence or uk_health_intelligence, which cover other domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Sibling tools like uk_location_intelligence or uk_property_intelligence might overlap, but no exclusions or comparisons are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_director_intelligence (A)
Read-only · Idempotent

Investigate a company director's full appointment history and risk profile. Returns all current and past directorships with dates, disqualification status, company portfolio analysis (active, dissolved, liquidated counts), Gazette notices, and a Director Risk Score (0-100) based on dissolution rate, disqualification history, and filing compliance. Use this tool to vet a director before appointment, conduct KYC/KYB checks, or investigate connected company networks. For company-level data (not individual directors), use uk_entity_intelligence instead. Sources: Companies House Officers API, Companies House Disqualifications, The Gazette.

Parameters (JSON Schema)
- depth (optional, default "standard"): Controls response detail. summary: search results, disqualification check, and top 5 appointments. standard: adds Gazette notices, full appointment list, and Director Risk Score. full: adds filing compliance check across the director's active company portfolio.
- name_or_id (required): Director name (e.g. "Ken Murphy") for a ranked search, or a Companies House officer ID (e.g. "abc123DEF") for exact lookup.
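The dissolution-rate component of the Director Risk Score can be illustrated from the portfolio counts the tool returns (active, dissolved, liquidated). This is a hypothetical sketch of the kind of signal involved; the actual scoring formula is proprietary and also weighs disqualification history and filing compliance:

```python
def dissolution_rate(active: int, dissolved: int, liquidated: int) -> float:
    """Fraction of a director's companies that ended in dissolution
    or liquidation. Illustrative only; the server's Director Risk
    Score formula is proprietary.
    """
    total = active + dissolved + liquidated
    if total == 0:
        return 0.0  # no portfolio, no signal
    return (dissolved + liquidated) / total
```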
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It describes the output comprehensively but does not disclose behavioral traits like data freshness, rate limits, or authorization requirements. The description mentions sources but lacks operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, tightly packed with information. Each sentence adds value: first defines purpose and capability, second lists data sources and outputs. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given there is no output schema, the description adequately explains all returned data types. However, it lacks details on pagination, response format, and error conditions. Still, for an intelligence search tool, it covers key aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters well. The description adds value by explaining how the depth parameter controls the breadth of data (summary vs standard vs full) and gives concrete examples for name_or_id, which helps an agent formulate queries.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches by name or officer ID for UK director intelligence, and lists all data returned (directorships, disqualifications, portfolio analysis, Gazette notices, risk score). It distinguishes itself from sibling tools, all of which serve different intelligence domains (connectivity, demographics, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for investigating UK directors, which is clear from the name and content. However, it does not explicitly state when to use this tool instead of others like uk_entity_intelligence, nor does it mention when not to use it or provide alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_due_diligence_report (A)
Read-only · Idempotent

Generate a company due diligence report with an opinionated verdict: PROCEED, PROCEED_WITH_CAUTION, ENHANCED_DUE_DILIGENCE, or DO_NOT_ENGAGE. Returns a Corporate Distress Score (0-100), categorised red and green flags with severity ratings, governance quality assessment, beneficial ownership transparency check, and an actionable recommendation. Use this tool for a quick go/no-go decision on a supplier, partner, or acquisition target. For raw company data without a verdict, use uk_entity_intelligence instead. Sources: Companies House, FCA Register, Charity Commission, The Gazette, Contracts Finder.

Parameters (JSON Schema)
- identifier (required): Company number (e.g. "00000006") or company name. Numbers are matched exactly; names trigger a ranked search.
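To show how a client might consume the verdict, here is a hypothetical mapping from the Corporate Distress Score (0-100) to the four verdict bands. The thresholds are invented for illustration; the server computes the real verdict itself, also weighing red/green flags and governance quality:

```python
VERDICTS = (
    "PROCEED",
    "PROCEED_WITH_CAUTION",
    "ENHANCED_DUE_DILIGENCE",
    "DO_NOT_ENGAGE",
)

def verdict_from_distress(score: int) -> str:
    """Hypothetical quartile mapping from distress score to verdict.

    Illustrative only: the report's actual verdict is computed
    server-side from more signals than the score alone.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be 0-100")
    if score < 25:
        return "PROCEED"
    if score < 50:
        return "PROCEED_WITH_CAUTION"
    if score < 75:
        return "ENHANCED_DUE_DILIGENCE"
    return "DO_NOT_ENGAGE"
```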
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it synthesizes a report with specific verdict categories (PROCEED, etc.), includes components like Corporate Distress Score, red/green flags, governance assessment, and actionable recommendations. It also mentions cost implications (replacing Creditsafe reports) and data sources, adding valuable context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and verdict details, then listing report components, cost comparison, and sources. Every sentence adds essential information without redundancy, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of generating a due diligence report with multiple components and no output schema or annotations, the description is largely complete. It outlines the report's structure, verdict options, and data sources. However, it lacks details on output format or potential limitations, which could be helpful for an agent to understand what to expect from the tool's response.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'identifier' clearly documented as a company number or name. The description does not add any additional meaning or clarification about this parameter beyond what the schema provides, so it meets the baseline score of 3 for high schema coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: generating a company due diligence report with an opinionated verdict. It pairs the verb 'generate' with the resource 'company due diligence report', and distinguishes itself from sibling tools by focusing on due diligence rather than intelligence or risk assessments in other domains like education or transport.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning it replaces £5-25 Creditsafe reports and lists sources (Companies House, FCA, etc.), suggesting it's for UK company due diligence. However, it does not explicitly state when to use this tool versus alternatives or provide any exclusions, leaving the agent to infer appropriate scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_education_intelligence (A)
Read-only · Idempotent

Find schools near any UK postcode with Ofsted ratings, school type (academy, maintained, free, independent), phase, age range, pupil numbers, and distance. Results are sorted by proximity. Use the phase parameter to filter to primary or secondary schools only. Use this tool for relocation advice, catchment area research, or comparing local education quality. This tool covers schools only; for broader area profiling, use uk_location_intelligence. Sources: DfE Get Information About Schools (GIAS), Ofsted Inspection Outcomes.

Parameters (JSON Schema)
- depth (optional, default "standard"): Controls response detail and search radius. summary: nearest 5 schools within 1 km. standard: schools within 2 km with Ofsted ratings. full: extended catchment area with detailed school profiles.
- phase (optional): Filter by school phase. When omitted, all phases are returned.
- postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Schools are returned sorted by distance from this postcode.
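The phase filter and proximity sort the tool performs server-side can be sketched client-side, which is also how an agent might re-rank or trim results. Field names such as 'phase' and 'distance_km' are assumptions about the response shape, since no output schema is published:

```python
def nearest_schools(schools, phase=None, limit=5):
    """Filter a list of school records by phase and sort by distance.

    Assumed record shape (hypothetical, no published output schema):
    {"name": ..., "phase": "primary" | "secondary" | ..., "distance_km": float}
    """
    rows = [s for s in schools if phase is None or s.get("phase") == phase]
    return sorted(rows, key=lambda s: s["distance_km"])[:limit]
```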
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the data sources but doesn't disclose behavioral traits like rate limits, authentication needs, error handling, or what happens with invalid postcodes. For a tool with no annotations, this leaves significant gaps in understanding how it behaves operationally.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficient: it states the core function, key data points, and sources in a single, clear sentence. Every word earns its place, with no redundant information or unnecessary elaboration, making it easy to grasp quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is somewhat complete but has gaps. It covers the purpose and sources well, but without annotations or output schema, it lacks details on behavioral traits and return values. This is adequate for basic understanding but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (postcode, depth, phase) with descriptions and enums. The description adds no additional meaning beyond what the schema provides, such as explaining how 'depth' affects results in practice or providing examples for 'phase'. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it finds schools near any UK postcode and provides specific data points (Ofsted ratings, type, phase, pupil numbers). It distinguishes itself from sibling tools by focusing on education intelligence rather than due diligence, energy, property, etc., and specifies the data sources (DfE GIAS + Ofsted).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (finding schools near postcodes with educational data) but doesn't explicitly state when to use this tool versus alternatives. No guidance is provided on when not to use it or what other tools might be better for different needs, such as using uk_location_intelligence for general location data without educational details.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_energy_intelligence (B)
Read-only · Idempotent

Analyse energy infrastructure and sustainability for any UK postcode. Returns real-time grid carbon intensity (gCO2/kWh) with forecast, electricity generation mix (wind, solar, gas, nuclear percentages), wholesale prices, local EPC rating distribution, and an expanded ESG Assessment Score (0-100, rated STRONG/GOOD/MODERATE/WEAK/POOR) with weighted component breakdown covering carbon intensity, renewable share, and building efficiency. Use this tool for ESG reporting, renewable energy viability, or comparing sustainability between locations. For environmental risk factors (flood, geology, radon), use uk_environmental_risk instead. Sources: National Grid ESO, Elexon BMRS, EPC Register.

Parameters (JSON Schema)
- depth (optional, default "standard"): Controls response detail. summary: current carbon intensity only. standard: adds generation mix, local EPC profile, and ESG score. full: adds wholesale energy prices, carbon intensity forecast, and detailed EPC breakdown.
- postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Energy data is returned for the grid region and local area.
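The five ESG rating bands (STRONG/GOOD/MODERATE/WEAK/POOR) suggest a simple bucketing of the 0-100 score. The cut-offs below are hypothetical equal quintiles for illustration, not the server's proprietary weighted scoring:

```python
def esg_band(score: int) -> str:
    """Bucket an ESG Assessment Score (0-100) into the tool's five ratings.

    Hypothetical equal-quintile boundaries; the server applies its own
    weighted component breakdown and cut-offs.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be 0-100")
    for floor, label in ((80, "STRONG"), (60, "GOOD"), (40, "MODERATE"), (20, "WEAK")):
        if score >= floor:
            return label
    return "POOR"
```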
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It lists the types of data returned (carbon intensity, generation mix, etc.) but doesn't describe how the tool behaves: it doesn't mention whether it's a read-only operation, potential rate limits, authentication needs, error handling, or data freshness. For a tool with no annotations, this leaves significant gaps in understanding its operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in a single sentence that packs substantial information: it lists the key data outputs and sources without unnecessary words. It's front-loaded with the core purpose, and every element (data types, sources) earns its place by clarifying the tool's scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (energy/ESG data with multiple outputs), no annotations, and no output schema, the description is moderately complete. It covers what data is available and sources, but lacks details on return formats, error conditions, or behavioral traits. For a tool with rich potential outputs, it should do more to compensate for the missing structured fields, though it meets a minimum viable level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds value by implicitly explaining the 'depth' parameter: it lists the data types (carbon intensity, generation mix, wholesale prices, etc.), which aligns with the schema's enum descriptions for summary/standard/full. This provides context for what each depth level includes, enhancing understanding beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: provides UK energy and ESG intelligence including grid carbon intensity, generation mix, wholesale prices, local EPC efficiency, and ESG scores. It specifies the data sources (National Grid ESO, Elexon, EPC Register), which adds specificity. However, it doesn't explicitly distinguish this tool from its siblings (like uk_environmental_risk or uk_property_intelligence) beyond the domain focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or suggest scenarios where this tool is preferred over others (e.g., uk_environmental_risk for broader environmental data). Usage is implied by the domain focus but lacks explicit context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_entity_intelligence (B)
Read-only, Idempotent

Look up any UK company, charity, or FCA-regulated entity by company number, charity number, or name. Returns company profile, officers, persons of significant control, filings, charges, FCA and Charity Commission regulatory status, Gazette notices, government contracts, and a Corporate Distress Score (0-100). Use this tool to verify a company, check its status, identify directors, or assess financial stability. For a go/no-go verdict on a company, use uk_due_diligence_report instead. To investigate a specific director's history across companies, use uk_director_intelligence instead. Sources: Companies House, FCA Register, Charity Commission, The Gazette, Contracts Finder.

Parameters (JSON Schema)

Name | Required | Description | Default
depth | No | Controls response detail. summary: basic profile only. standard (default): adds officers, regulatory status, distress score. full: adds government contracts, officer network analysis, and complete filing history. | standard
identifier | Yes | Company number (e.g. "00000006"), charity number (e.g. "1234567"), or company name. Numbers are matched exactly; names return the best ranked match. | (none)
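To make the calling convention concrete, here is a minimal sketch of an MCP tools/call request for this tool. The JSON-RPC envelope is the standard MCP wire shape; the argument names and example values come from the schema above, and the transport and server URL are not shown.

```python
import json

# Sketch of an MCP tools/call request for uk_entity_intelligence.
# "identifier" and "depth" are the two parameters in the schema above;
# "depth" falls back to "standard" on the server when omitted.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "uk_entity_intelligence",
        "arguments": {"identifier": "00000006", "depth": "standard"},
    },
}
wire = json.dumps(request)
```

An agent that only needs a basic profile would swap "depth" to "summary" to cut response size.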
Behavior 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It lists data sources and return fields but does not mention critical behavioral traits like rate limits, authentication requirements, error handling, or data freshness. This leaves significant gaps for an agent to understand operational constraints.

Conciseness 4/5

The description is efficiently structured in two sentences: the first states the purpose and return data, the second lists sources. It is front-loaded with key information and avoids unnecessary details, though it could be slightly more concise by integrating sources into the first sentence.

Completeness 3/5

Given the tool's complexity (multiple data sources, rich return fields) and lack of annotations and output schema, the description is moderately complete. It outlines return data and sources but misses behavioral context and output structure details. This is adequate but has clear gaps for effective agent use.

Parameters 3/5

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional meaning beyond what the schema provides, such as examples for identifier formats beyond 'Company number' or implications of depth choices. Baseline 3 is appropriate as the schema does the heavy lifting.

Purpose 5/5

The description clearly states the action ('Look up') and the target ('UK company, charity, or regulated entity'), distinguishing it from siblings that focus on specific domains like education, health, or transport. It specifies the comprehensive data returned, making the purpose explicit and differentiated.

Usage Guidelines 2/5

The description provides no guidance on when to use this tool versus its siblings, such as uk_due_diligence_report or uk_legal_intelligence, which might overlap in risk or regulatory aspects. It lacks explicit when-to-use or when-not-to-use instructions, leaving the agent to infer based on tool names alone.

uk_environmental_risk (A)
Read-only, Idempotent

Calculate a multi-factor Environmental Risk Score (0-100) for any UK postcode. Combines flood zone classifications (rivers, surface, coastal), ground stability and shrink-swell clay hazard from BGS, radon gas probability, waterbody ecological status under the Water Framework Directive, and grid carbon intensity. Returns individual risk component scores with severity ratings and an overall weighted composite. Use this tool for land development evaluation, insurance exposure assessment, or environmental due diligence. For water-specific pollution and sewage data, use uk_water_intelligence instead. Sources: Environment Agency, BGS GeoIndex, EA Water Quality Archive, National Grid ESO.

Parameters (JSON Schema)

Name | Required | Description | Default
depth | No | Controls response detail. summary: flood risk and geology only. standard (default): adds water quality classification and composite score. full: adds carbon intensity, detailed flood monitoring station readings, and full risk component breakdown. | standard
postcode | Yes | Full UK postcode (e.g. "SW1A 1AA"). Returns environmental risk data for the surrounding area. | (none)
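Since the schema requires a full postcode, a caller might normalise user input before building the arguments. The helper below is illustrative, not part of the server; it assumes the input is already a complete postcode, with or without the internal space.

```python
import re

def normalise_postcode(raw):
    """Uppercase a full UK postcode and restore the single internal space
    (the inward code is always the last three characters)."""
    compact = re.sub(r"\s+", "", raw).upper()
    return compact[:-3] + " " + compact[-3:]

# Arguments for uk_environmental_risk, per the schema above.
arguments = {"postcode": normalise_postcode("sw1a1aa"), "depth": "full"}
```

Normalising client-side avoids wasted round-trips when the server rejects malformed postcodes.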
Behavior 3/5

No annotations are provided, so the description carries the full burden. It discloses the tool's behavior by listing the factors combined and data sources, but does not mention critical traits like rate limits, authentication needs, error handling, or whether the score is real-time or cached. The description adds some context but leaves gaps in behavioral disclosure.

Conciseness 5/5

The description is front-loaded with the core purpose in the first sentence, followed by specific factors and sources in a concise list. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Completeness 3/5

Given the complexity (multi-factor risk score) and lack of annotations and output schema, the description is moderately complete. It covers the purpose, factors, and sources, but does not explain the return format, score interpretation, or potential limitations. For a tool with no output schema, more detail on the response would be beneficial, leaving some gaps in completeness.

Parameters 4/5

Schema description coverage is 100%, so the schema already documents both parameters well. The description does not add specific parameter semantics beyond what's in the schema, but it implicitly supports the 'depth' parameter by listing factors that align with the depth levels. Since there are only 2 parameters and schema coverage is high, a baseline of 3 is appropriate, but the description's factor list provides marginal additional context, warranting a 4.

Purpose 5/5

The description clearly states the tool's purpose: it calculates a 'Multi-factor Environmental Risk Score (0-100) for any UK postcode' and lists the specific factors combined (flood risk, ground stability, shrink-swell clay, radon potential, water quality, carbon intensity). It distinguishes from siblings by focusing on environmental risk rather than education, health, legal, or other intelligence areas.

Usage Guidelines 3/5

The description implies usage for environmental risk assessment in the UK, but does not explicitly state when to use this tool versus alternatives (e.g., which sibling tools might overlap or when other tools are more appropriate). It provides context (UK postcode focus) but lacks explicit guidance on exclusions or specific scenarios.

uk_funding_intelligence (A)
Read-only, Idempotent

Discover UK grants, research funding, and government contracts matching a search query. Returns grants from the 360Giving registry (funder, amount, recipient), UKRI-funded research projects (PI, institution), government contracts from Contracts Finder, and a Funding Opportunity Score (0-100). Use this tool to find grant funding, identify research collaboration opportunities, or map the funding landscape for a sector. For live government procurement tenders, use uk_tenders_intelligence instead. For company patent and R&D data, use uk_innovation_intelligence instead. Sources: 360Giving, UKRI Gateway to Research, Contracts Finder.

Parameters (JSON Schema)

Name | Required | Description | Default
depth | No | Controls response detail. summary: matching grants only. standard (default): adds UKRI research projects, government contracts, and Funding Opportunity Score. full: adds detailed grant breakdowns and funder profiles. | standard
query | Yes | Sector, topic, or keyword (e.g. "clean energy", "AI healthcare", "social housing innovation"). Broader terms return more results across all funding sources. | (none)
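A thin wrapper can make the documented default explicit on the client side. The function name is hypothetical; the argument names and the "standard" default come from the schema above.

```python
def funding_arguments(query, depth="standard"):
    # "depth" defaults to "standard", mirroring the schema's default;
    # broader queries return more results across all funding sources.
    return {"query": query, "depth": depth}

standard = funding_arguments("clean energy")
full = funding_arguments("AI healthcare", depth="full")
```

Keeping the default in one place means an agent only escalates to "full" when funder profiles are actually needed.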
Behavior 3/5

With no annotations provided, the description must fully disclose behavioral traits. It mentions data sources and a scoring system, which is good. However, it does not state if the tool is read-only or whether it makes external calls, nor does it describe any side effects or authorization requirements.

Conciseness 4/5

The description is concise, covering purpose, data sources, and search capabilities in two sentences. It is front-loaded with the core function. However, the parenthetical note on the 'Funding Opportunity Score' could be integrated more naturally.

Completeness 3/5

Given the absence of annotations, descriptions must compensate. While the description covers data sources and basic usage, it lacks details on output format (no output schema) and how the scoring works. For a tool with two simple parameters, it is adequate but could be more thorough.

Parameters 4/5

Schema coverage is 100% and both parameters have descriptions. The description adds value by explaining the 'depth' enum in detail, clarifying what each level includes. The 'query' parameter is also illustrated with examples, though the description could provide more nuance on expected input format.

Purpose 4/5

The description states the tool provides UK grant and funding intelligence from multiple sources including 360Giving, UKRI, and Contracts Finder, with a proprietary score. The verb 'matching' and examples of search queries clarify its purpose. However, it could better distinguish from sibling tools like uk_tenders_intelligence, which likely also covers contracts.

Usage Guidelines 3/5

The description says to search 'by sector, topic, or keyword' and provides example queries, which is helpful. However, it does not explain when to use this tool over siblings like uk_tenders_intelligence or uk_innovation_intelligence, nor does it mention any prerequisites or limitations.

uk_health_intelligence (A)
Read-only, Idempotent

Map the healthcare landscape around any UK postcode. Returns public health indicators (life expectancy, obesity, smoking rates, mental health, deprivation health rank), CQC-inspected providers with ratings and inspection dates (hospitals, GPs, care homes, dentists, pharmacies), GP prescribing patterns, and a Health Service Quality Score (0-100, rated EXCELLENT/GOOD/MODERATE/POOR/INADEQUATE). Use this tool for health impact assessments, care home due diligence, or evaluating local health service quality. For environmental health risks (flood, radon, pollution), use uk_environmental_risk instead. Sources: OHID Fingertips, CQC, OpenPrescribing.

Parameters (JSON Schema)

Name | Required | Description | Default
depth | No | Controls response detail. summary: key public health indicators only. standard (default): adds CQC-rated providers with inspection outcomes. full: adds GP prescribing data and detailed health indicator breakdowns. | standard
postcode | Yes | Full UK postcode (e.g. "SW1A 1AA"). Healthcare data is returned for the surrounding area. | (none)
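The description labels the 0-100 Health Service Quality Score with five ratings but does not publish the numeric cut-offs, so any client-side mapping has to assume them. The evenly spaced thresholds below are illustrative guesses, not server behaviour.

```python
# Assumed, evenly spaced cut-offs for the five documented ratings;
# the server does not publish the real band boundaries.
BANDS = [(80, "EXCELLENT"), (60, "GOOD"), (40, "MODERATE"), (20, "POOR")]

def rating(score):
    """Map a 0-100 quality score onto the documented rating labels."""
    for floor, label in BANDS:
        if score >= floor:
            return label
    return "INADEQUATE"
```

In practice an agent should prefer the rating string returned by the tool itself over any local reconstruction like this.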
Behavior 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions data sources but does not describe behavioral traits such as rate limits, authentication requirements, response formats, error handling, or whether the operation is read-only or has side effects. This leaves significant gaps for an agent to understand how to interact with the tool effectively.

Conciseness 5/5

The description is highly concise and front-loaded, consisting of a single sentence that efficiently conveys the tool's purpose, scope, and data sources without any wasted words. Every element (e.g., 'health indicators', 'CQC-rated care providers', 'GP prescribing patterns', 'Sources') earns its place by adding critical information.

Completeness 3/5

Given the tool's complexity (2 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and data sources well, but without annotations or an output schema, it lacks details on behavioral aspects (e.g., response format, errors) and does not fully compensate for the missing structured data, leaving the agent with gaps in understanding how to use the tool effectively.

Parameters 4/5

Schema description coverage is 100%, so the schema fully documents both parameters. The description does not add specific parameter semantics beyond what the schema provides, but it implicitly supports the parameters by mentioning the data types (health indicators, CQC providers, prescribing data) that align with the 'depth' enum values. With 2 parameters and high schema coverage, a baseline of 3 is appropriate, but the description's alignment with parameter values slightly enhances understanding.

Purpose 5/5

The description clearly states the tool's purpose, naming specific data outputs ('health indicators', 'CQC-rated care providers', 'GP prescribing patterns') and its domain (UK healthcare intelligence), and it distinguishes itself from siblings by focusing exclusively on healthcare data rather than education, energy, legal, or other domains covered by sibling tools.

Usage Guidelines 3/5

The description implies usage context by specifying 'for any postcode' and listing data sources, but it does not explicitly state when to use this tool versus alternatives (e.g., other intelligence tools for different domains) or provide any exclusions. Usage is inferred from the purpose rather than explicitly guided.

uk_innovation_intelligence (A)
Read-only, Idempotent

Assess a company's innovation activity and IP portfolio by name or Companies House number. Returns EPO patent filings and grants with IPC codes, UKRI research grants with funding amounts, R&D intensity analysis by SIC code, and an Innovation Score (0-100). Use this tool for investment due diligence on IP strength, benchmarking innovation against peers, or identifying patent-active competitors. For company financial health and governance data, use uk_entity_intelligence instead. For grant funding opportunities (not company-specific), use uk_funding_intelligence instead. Sources: EPO Open Patent Services, UKRI Gateway to Research, Companies House.

Parameters (JSON Schema)

Name | Required | Description | Default
depth | No | Controls response detail. summary: patent count and SIC-based R&D analysis only. standard (default): adds UKRI research grants, full patent list, and Innovation Score. full: adds detailed patent abstracts, IPC breakdowns, and grant funding amounts. | standard
identifier | Yes | Company number (e.g. "00000006") or company name. Numbers give exact results; names trigger a search. | (none)
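Because digit-only identifiers are matched exactly while names trigger a search, a caller may want to normalise company numbers first. Zero-padding to eight characters follows the common Companies House convention; whether this server requires it is an assumption, and the helper name is hypothetical.

```python
def normalise_identifier(identifier):
    # Companies House numbers are conventionally zero-padded to eight
    # characters; names are passed through untouched for a ranked search.
    if identifier.isdigit():
        return identifier.zfill(8)
    return identifier

# Arguments for uk_innovation_intelligence, per the schema above.
arguments = {"identifier": normalise_identifier("6"), "depth": "standard"}
```

Normalising to the exact-match form avoids falling back to a fuzzy name search by accident.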
Behavior 3/5

No annotations are provided, so the description must cover behavioral traits. It lists data sources and mentions depth variations but does not disclose any side effects, rate limits, or required permissions. The description is adequate but lacks depth on what 'full' depth entails beyond 'detailed patent analysis.'

Conciseness 4/5

The description is two sentences and front-loaded with purpose. The first sentence provides a comprehensive overview and the second adds search and source context. Slightly more verbose than necessary for the enum explanation, but still efficient overall.

Completeness 4/5

Given the complexity (multiple data sources, depth levels, no output schema), the description adequately covers the tool's purpose, inputs, and depth variations. However, it does not describe the output format or what the Innovation Score range means, which would be helpful for an agent.

Parameters 4/5

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining that 'summary' includes patents+SIC, 'standard' adds UKRI+scoring, and 'full' adds detailed patent analysis. This semantic mapping enriches the enum options beyond the schema's terse descriptions.

Purpose 5/5

The description clearly states the tool provides UK innovation intelligence combining four specific data sources (EPO patents, UKRI grants, SIC analysis, Innovation Score) with a specific search method (by company name or number). It is distinct from sibling tools which cover other domains like connectivity, demographics, etc.

Usage Guidelines 3/5

The description implies usage for searching a UK company's innovation data, but does not explicitly state when to use this tool versus alternatives. While sibling tools cover different domains, there is no guidance on scenarios where this tool is preferred or when to use different depth settings.

uk_location_intelligence (A)
Read-only, Idempotent

Profile any UK postcode with safety, environmental, and socioeconomic data. Returns admin hierarchy, deprivation rank, 12-month crime statistics by category, flood risk zones, food hygiene ratings, carbon intensity, labour market indicators, and health metrics. Use this tool for area assessment, relocation decisions, or neighbourhood comparison. For detailed Census demographics and IMD breakdowns, use uk_demographics_intelligence instead. For property-specific data (prices, EPC), use uk_property_intelligence instead. Sources: ONS, Police.uk, Environment Agency, Food Standards Agency, Nomis, OHID Fingertips.

Parameters (JSON Schema)

Name | Required | Description | Default
depth | No | Controls response detail. summary: postcode lookup and admin hierarchy only. standard (default): adds crime statistics, flood risk, food hygiene. full: adds carbon intensity, labour market data, and health indicators. | standard
postcode | Yes | Full UK postcode (e.g. "SW1A 1AA" or "SW1A1AA"). Partial postcodes are not supported. | (none)
radius_m | No | Search radius in metres for nearby data such as crime and food hygiene. | (none)
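A full call including the optional radius might look like the sketch below. The JSON-RPC envelope is the standard MCP tools/call shape; the 1000 m radius is just an example value, since the schema documents no default.

```python
import json

# Sketch of a tools/call request including the optional radius_m,
# which scopes nearby data such as crime and food hygiene.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "uk_location_intelligence",
        "arguments": {"postcode": "SW1A 1AA", "depth": "standard", "radius_m": 1000},
    },
}
wire = json.dumps(request)
```

Dropping "radius_m" entirely leaves the radius to the server's own behaviour, which the schema does not specify.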
Behavior 2/5

No annotations are provided, so the description carries the full burden. It lists data sources but does not disclose behavioral traits such as rate limits, authentication needs, data freshness, or error handling. The description is informative about what data is returned but lacks operational context.

Conciseness 5/5

The description is front-loaded with the core purpose in the first sentence, followed by a detailed but efficient list of data types and sources. Every sentence adds value without redundancy, making it appropriately sized for a complex tool.

Completeness 3/5

Given the tool's complexity (3 parameters, no output schema, no annotations), the description is moderately complete. It covers what data is returned and sources, but lacks details on output format, error cases, or performance characteristics, which are important for a data-rich tool.

Parameters 4/5

Schema description coverage is 100%, so the baseline is 3. The description adds value by summarizing the data categories returned, which helps contextualize the 'depth' parameter's enum values (e.g., 'full' includes carbon, labour, health). However, it does not explain parameter interactions or provide examples beyond the schema.

Purpose 5/5

The description clearly states the tool's purpose with a specific verb ('Profile') and resource ('UK postcode'), and lists the comprehensive data types returned. It distinguishes itself from siblings by focusing on location-based intelligence rather than specialized domains like education, health, or property.

Usage Guidelines 3/5

The description implies usage for profiling UK postcodes with various data types, but does not explicitly state when to use this tool versus alternatives like uk_property_intelligence or uk_environmental_risk. It mentions sources but lacks guidance on prerequisites or exclusions.

uk_market_sizing (B)
Read-only, Idempotent

Analyse market opportunity for any UK postcode with optional sector filtering. Returns a Market Opportunity Score (0-100), labour market data (earnings, employment rates, claimant count), property market context (average prices, transaction volumes, price trends), and competitor density by SIC code. When a sector keyword is provided, returns counts of local competing businesses. Use this tool for feasibility studies, franchise territory analysis, or comparing commercial potential between locations. For Census demographics and deprivation data, use uk_demographics_intelligence instead. Sources: ONS Census 2021, Nomis, HM Land Registry, Companies House.

Parameters (JSON Schema)

Name | Required | Description | Default
depth | No | Controls response detail. summary: labour market data only. standard (default): adds property market context, competitor counts, and opportunity score. full: adds detailed competitor company profiles and extended market indicators. | standard
sector | No | Optional SIC code or sector keyword for competitor analysis (e.g. "62020", "restaurant", "construction"). When omitted, competitor analysis is skipped. | (none)
postcode | Yes | Full UK postcode (e.g. "M1 1AA"). Market analysis is centred on this location. | (none)
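Because omitting "sector" skips competitor analysis entirely, a caller should leave the key out when no filter is wanted rather than send an empty string. A sketch, with a hypothetical helper name:

```python
def market_sizing_arguments(postcode, sector=None, depth="standard"):
    # Leaving "sector" out skips competitor analysis, per the schema;
    # it accepts either a SIC code ("62020") or a keyword ("restaurant").
    args = {"postcode": postcode, "depth": depth}
    if sector is not None:
        args["sector"] = sector
    return args

filtered = market_sizing_arguments("M1 1AA", sector="restaurant")
unfiltered = market_sizing_arguments("M1 1AA")
```

The conditional insertion keeps the "no competitor analysis" path explicit instead of relying on the server to ignore a blank value.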
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses data sources (ONS Census, Nomis, Land Registry, Companies House) which adds useful context about reliability and scope. However, it doesn't mention behavioral traits like rate limits, authentication needs, response time, error conditions, or whether the analysis is cached/real-time. For a tool with no annotations, this leaves significant gaps in understanding how it behaves operationally.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: the first states purpose and returns, the second lists sources. It's front-loaded with key information and avoids redundancy. However, the second sentence about sources could be integrated more smoothly, and there's minor room to tighten phrasing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides good purpose and data scope but lacks operational context. It covers what the tool does and sources, but doesn't explain return format, error handling, or usage constraints. For a 3-parameter tool with rich data returns, this is adequate but has clear gaps in guiding effective agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds marginal value by mentioning 'optional sector' and implying the postcode is for geographic targeting, but doesn't provide additional semantics beyond what's in the schema (e.g., format examples for sector beyond SIC codes, practical implications of depth levels). Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'Market opportunity analysis for any UK postcode and optional sector' and lists the specific data returned (Market Opportunity Score, labour market data, property market context, competitor density). It distinguishes from siblings by focusing on market sizing rather than due diligence, education, energy, etc. However, it doesn't explicitly contrast with the most similar sibling 'uk_location_intelligence' which might overlap in geographic analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'Market opportunity analysis' and mentions optional sector filtering, suggesting when to include the sector parameter. However, it provides no explicit guidance on when to use this tool versus alternatives like 'uk_location_intelligence' or other UK intelligence tools, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_planning_intelligence (grade A)
Read-only · Idempotent

Planning and development intelligence for any UK postcode. Returns planning application history (approved/refused/pending/withdrawn counts), conservation areas, listed buildings by grade, Green Belt status, Article 4 directions, brownfield data, flood risk, council approval rates with portal URL, environmental designations (SSSI, AONB), mining risk, and a Development Score (0-100) from HIGHLY_CONSTRAINED to HIGHLY_FAVOURABLE. Use this tool to assess planning permission potential or prepare application strategy. For property-level data (prices, EPC), use uk_property_intelligence instead. Sources: Planning Data Platform, Natural England, MHCLG, Environment Agency, Coal Authority.

Parameters (JSON Schema)

depth (optional; default: standard): Controls response detail. summary: key constraints, conservation area, flood risk, and development score. standard: adds planning application breakdown, environmental designations, council stats with portal URL. full: adds mining risk, detailed brownfield data, and full application history.
postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Returns planning data for the surrounding area.
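Every postcode-keyed tool in this set requires a full postcode rather than an outcode. A minimal client-side pre-flight check, using a simplified pattern rather than the full official postcode grammar, might look like:

```python
import re

# Simplified full-postcode check (not the complete official grammar): the
# postcode-keyed tools above require a full postcode such as "SW1A 1AA",
# not just an outcode like "SW1A".
FULL_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s?\d[A-Z]{2}$", re.IGNORECASE)

def is_full_postcode(value: str) -> bool:
    return bool(FULL_POSTCODE.match(value.strip()))
```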
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It mentions data sources and includes a 'depth' parameter controlling output richness, but does not disclose whether the tool is read-only, any rate limits, or what happens if the postcode is invalid.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph but packed with information. It is front-loaded with the core purpose. A bullet list might improve readability, but it remains efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich schema with the depth parameter and no output schema, the description provides sufficient context on what each depth level returns. It lacks details on error handling or response format, but the data sources list adds completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with descriptions for both parameters. The description adds context about the data sources and score range, but the parameter semantics are already well covered by the schema. Hence baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns UK planning intelligence for any postcode, listing specific data types (planning applications, conservation areas, etc.) and a proprietary score. It distinguishes itself from siblings like uk_property_intelligence by focusing on planning-specific constraints and approvals.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly suggests use for assessing development viability at a given postcode. It does not explicitly mention when not to use it or name alternative siblings, but the context of postcode-based planning intelligence is clear from the tool name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_political_intelligence (grade A)
Read-only · Idempotent

Get political representation and civic engagement data for any UK postcode. Returns the current MP with party affiliation, majority size, and tenure, active parliamentary petitions with constituency-level signature counts, and a Political Engagement Index measuring civic participation relative to the national average. Use this tool for political landscape research, public sentiment gauging via petition activity, or stakeholder engagement preparation. Sources: UK Parliament Members API, UK Parliament Petitions API.

Parameters (JSON Schema)

depth (optional; default: standard): Controls response detail. summary: current MP profile only. standard: adds top petitions with local signature counts and Political Engagement Index. full: adds detailed petition analysis and historical MP data.
postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Returns data for the parliamentary constituency containing this postcode.
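The "relative to the national average" framing of the Political Engagement Index can be illustrated with a toy ratio. The real formula is proprietary, so the function below is only a sketch of the concept, with 100 meaning exactly the national average.

```python
# Toy reconstruction of an index "relative to the national average": the real
# Political Engagement Index formula is proprietary, so this only illustrates
# the concept, with 100 meaning exactly the national average.
def engagement_index(local_per_capita, national_per_capita):
    if national_per_capita <= 0:
        raise ValueError("national baseline must be positive")
    return round(100 * local_per_capita / national_per_capita, 1)
```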
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It names data sources (UK Parliament APIs), implying external dependencies. However, it does not disclose latency, rate limits, or whether petitions data is real-time. The 'full' depth hints at detailed analysis but lacks specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states what the tool does, second lists sources. Front-loaded with essential info, zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description explains the return structure for each depth level. The tool is moderate in complexity; the description covers core outputs but omits optional details like petition count ranges or engagement index scale.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds context by explaining the depth levels (summary, standard, full) with concrete outputs (MP profile, petitions, engagement, detailed analysis). This meaningfully supplements the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool provides MP details, parliamentary petitions, and a Political Engagement Index for a given UK postcode, naming the data sources. It is distinct from sibling tools like uk_demographics_intelligence or uk_entity_intelligence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for querying political data by postcode, and the depth parameter allows narrowing scope. However, it does not explicitly state when to use this tool versus alternatives, though the unique set of data (MP, petitions, engagement index) differentiates it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_property_intelligence (grade A)
Read-only · Idempotent

Property due diligence for any UK postcode. Returns HM Land Registry price-paid history with area averages, EPC energy performance certificates (90+ fields including ratings, wall/roof/heating descriptions, CO2 emissions), planning constraints, and an Environmental Risk Score (0-100) combining flood, ground stability, radon, and crime. Use this tool for property purchase research, conveyancing preparation, buy-to-let analysis, or rental market assessment. For broader area profiling (demographics, schools, transport), use uk_location_intelligence instead. Sources: HM Land Registry, EPC Register, Planning Data Platform, Environment Agency, BGS, Police.uk.

Parameters (JSON Schema)

depth (optional; default: standard): Controls response detail. summary: price history and flood risk only. standard: adds EPC data, planning constraints, crime statistics. full: adds geology, radon, and detailed environmental risk scoring with component breakdown.
postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Returns data for all properties in this postcode area.
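One plausible shape for a 0-100 composite like the Environmental Risk Score is a weighted mean of the four components the description names. The weights below are invented purely for illustration; they are not the platform's proprietary model.

```python
# Sketch of one plausible shape for a 0-100 composite risk score: a weighted
# mean of the four components the description names. The weights are invented
# for illustration and are not the platform's proprietary model.
RISK_WEIGHTS = {"flood": 0.40, "ground_stability": 0.25, "radon": 0.15, "crime": 0.20}

def environmental_risk_score(components):
    """Combine 0-100 component risks into a single 0-100 score."""
    return round(sum(RISK_WEIGHTS[k] * components[k] for k in RISK_WEIGHTS))
```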
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by mentioning data sources (Land Registry, EPC Register, etc.) and the tool's practical value (replaces conveyancing searches). However, it doesn't disclose important behavioral aspects like rate limits, authentication requirements, error handling, or response format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: it starts with the core purpose, lists specific outputs, explains practical value, and cites data sources. Every sentence adds value with no wasted words, and information is front-loaded appropriately for agent comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (comprehensive property intelligence with multiple data sources) and no output schema, the description does a reasonable job but has gaps. It lists output categories but doesn't describe the return structure, format, or potential limitations. For a tool with such rich functionality, more detail about response organization would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining what different depth levels return: 'summary = prices + flood. standard = adds EPC, planning, crime. full = adds geology, detailed risk scoring.' This provides meaningful context beyond the schema's enum values, though it doesn't fully explain the 'postcode' parameter beyond what the schema states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Property due diligence for any UK postcode' with specific outputs listed (price history, EPC data, planning constraints, Environmental Risk Score). It distinguishes from siblings by focusing on comprehensive property intelligence, unlike more specialized tools like uk_energy_intelligence or uk_environmental_risk.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for property due diligence on UK postcodes, replacing conveyancing searches. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the context implies it's for comprehensive property assessment.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_stamp_duty_calculator (grade A)
Read-only · Idempotent

Calculate UK Stamp Duty Land Tax (SDLT) for England and Northern Ireland property purchases. Returns total tax due, effective tax rate, and a breakdown by band. Includes first-time buyer relief (nil rate up to £425,000 on properties up to £625,000) and additional property surcharge (+5% for buy-to-let and second homes) at 2025/26 rates. Use this tool when a user asks about stamp duty, SDLT, property purchase tax, or how much tax they'll pay on a house purchase. For full property due diligence including EPC, flood risk, and planning constraints, use uk_property_intelligence instead. Source: HMRC SDLT Rates.

Parameters (JSON Schema)

price (required): Property purchase price in GBP (e.g. 350000 for a £350,000 property).
buyer_type (optional; default: standard): Buyer category. standard: normal purchase. first-time: first-time buyer (eligible for relief on properties up to £625,000). additional: additional property such as buy-to-let or second home (+5% surcharge).
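The banded SDLT arithmetic the calculator performs can be reproduced in a few lines. The relief thresholds (£425,000/£625,000) and the +5% surcharge come from the description above; the standard residential bands and the 5% first-time rate between the two thresholds are commonly cited 2025/26 figures, so treat HMRC as the authoritative source.

```python
# Worked sketch of the banded SDLT arithmetic. The relief thresholds and the
# +5% surcharge are taken from the tool description; the standard residential
# bands and the 5% first-time rate between GBP 425k and 625k are commonly
# cited 2025/26 figures. Verify against HMRC before relying on the numbers.
STANDARD_BANDS = [(125_000, 0.00), (250_000, 0.02), (925_000, 0.05),
                  (1_500_000, 0.10), (float("inf"), 0.12)]

def sdlt(price, buyer_type="standard"):
    """Total SDLT due in GBP for an England/NI residential purchase."""
    if buyer_type == "first-time" and price <= 625_000:
        bands = [(425_000, 0.00), (float("inf"), 0.05)]  # relief bands
    else:
        bands = STANDARD_BANDS  # first-time relief is lost above GBP 625,000
    surcharge = 0.05 if buyer_type == "additional" else 0.0
    tax, lower = 0.0, 0
    for upper, rate in bands:
        if price > lower:
            tax += (min(price, upper) - lower) * (rate + surcharge)
        lower = upper
    return round(tax)
```

On a £350,000 standard purchase this yields £7,500 (an effective rate of roughly 2.1%), which matches the band-by-band breakdown style the tool returns.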
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds behavioral context: includes first-time buyer relief and additional property surcharge details, and notes rates are for 2025/26. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is very concise: 4 sentences covering purpose, return values, reliefs, usage guidance, and source. No redundant words, well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculator with 2 params and no output schema, the description explains what is returned (total tax, effective rate, breakdown) and key reliefs. It also states the source (HMRC). Could be more explicit about residential-only scope, but the reliefs imply residential. Overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline 3. The description adds extra meaning: price is in GBP, buyer_type options explained with thresholds (e.g., first-time: nil rate up to £425k on properties up to £625k, additional: +5% surcharge). This goes beyond the schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates UK Stamp Duty Land Tax (SDLT) for England and Northern Ireland, returns total tax, effective rate, and breakdown by band. It distinguishes from sibling 'uk_property_intelligence' by specifying that this tool is for tax calculation only.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use: when user asks about stamp duty, SDLT, property purchase tax, or tax on house purchase. Also tells when not to use: for full property due diligence, use uk_property_intelligence. This is clear and helpful.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_tenders_intelligence (grade C)
Read-only · Idempotent

Search open UK government procurement opportunities and recently awarded public contracts from the last 90 days. Returns up to 30 tenders with title, contracting authority, estimated or awarded value in GBP, deadline, status (open, awarded, closed), and a Procurement Opportunity Score (0-100, rated EXCELLENT/STRONG/MODERATE/LIMITED/POOR) assessing opportunity quality. Both parameters are optional; when omitted, returns the most recent tenders across all sectors and regions. Use this tool to find public sector sales opportunities, track government spending, or research competitor contract wins. For grants and research funding (non-procurement), use uk_funding_intelligence instead. For import/export duties, use uk_trade_intelligence instead. Sources: Contracts Finder, Find a Tender Service (FTS).

Parameters (JSON Schema)

depth (optional; default: standard): Controls response detail. summary: top 5 tenders with summary statistics only. standard: all matching tenders with buyer and deadline details. full: adds direct links to tender notices and value distribution analysis.
region (optional): UK region to filter tenders (e.g. "London", "North West", "Scotland", "Wales"). When omitted, returns tenders from all regions.
sector (optional): Sector keyword to filter tenders (e.g. "IT", "construction", "healthcare", "facilities management"). When omitted, returns tenders across all sectors.
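The five labels attached to the Procurement Opportunity Score (EXCELLENT through POOR) suggest a simple threshold mapping over the 0-100 range. The cut-offs below are invented for illustration, since the platform's actual scoring is proprietary.

```python
# The five labels attached to the Procurement Opportunity Score suggest a
# simple threshold mapping. The cut-offs here are invented for illustration;
# the platform's actual scoring is proprietary.
def opportunity_rating(score):
    for cutoff, label in [(80, "EXCELLENT"), (60, "STRONG"),
                          (40, "MODERATE"), (20, "LIMITED")]:
        if score >= cutoff:
            return label
    return "POOR"
```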
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions data sources (Contracts Finder, Find a Tender) which adds some context, but doesn't describe what the tool actually returns (e.g., format, structure, limitations), whether it requires authentication, rate limits, or how it handles the optional filtering parameters. For an intelligence tool with no output schema, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded with all essential information in a single sentence. Every word earns its place by specifying the domain, data types, and sources without any fluff or redundancy. The structure efficiently communicates the tool's scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations and no output schema, the description is incomplete for proper agent usage. While it specifies the domain and sources, it doesn't explain what kind of intelligence is returned (e.g., reports, lists, analytics), how results are structured, or any limitations. For a 2-parameter intelligence tool with no structured output documentation, this leaves too many gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both optional parameters (region and sector) with examples. The description doesn't add any parameter-specific information beyond what's in the schema. With high schema coverage and no parameters mentioned in the description, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides intelligence on UK public procurement including open tenders, awarded contracts, buyer profiles, and average values, with specific data sources named. It uses specific verbs like 'intelligence' and identifies the resource domain. However, it doesn't explicitly distinguish this from sibling tools like 'uk_market_sizing' or 'uk_trade_intelligence' that might overlap in economic data analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the 13 sibling tools listed. It doesn't mention alternatives, exclusions, or specific contexts where this procurement intelligence tool is preferred over other UK intelligence tools. The user must infer usage from the title alone without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_trade_intelligence (grade B)
Read-only · Idempotent

Look up UK import duty rates, preferential trade agreement rates, and tariff quotas for any commodity. Accepts an HS commodity code (e.g. '0201' for beef) or a plain-text product description matched to the nearest tariff heading. Returns MFN duty rate, preferential rates by country or trade agreement, quota counts, total trade measures, and a proprietary Tariff Impact Score (0-100, rated FAVOURABLE/MODERATE/COMPLEX/RESTRICTIVE/PROHIBITIVE) assessing overall tariff burden. Use this tool for landed cost calculations, trade compliance research, or post-Brexit tariff comparisons. This tool covers customs duties only; for government procurement contracts, use uk_tenders_intelligence instead. Source: HMRC Trade Tariff API.

Parameters (JSON Schema)

depth (optional; default: standard): Controls response detail. summary: commodity match, MFN duty rate, and VAT only. standard: adds preferential rates (up to 10 countries), quota count, and total measures. full: adds all preferential rates without limit.
commodity (required): HS commodity code (e.g. "0201" for beef, "8471" for computers) or plain-text product description (e.g. "fresh strawberries"). Codes are matched exactly; text triggers a search.
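The commodity parameter's dual behaviour (exact match for numeric HS codes, search for free text) implies a dispatch step like the following. The helper is hypothetical and mirrors only the documented rule; the length bounds are an assumption.

```python
# The commodity parameter accepts either an HS code (matched exactly) or free
# text (which triggers a search). This hypothetical dispatcher mirrors that
# documented rule; the 2-10 digit length bounds are an assumption.
def classify_commodity_input(value):
    cleaned = value.strip()
    if cleaned.isdigit() and 2 <= len(cleaned) <= 10:
        return ("code", cleaned)    # e.g. "0201": exact tariff-heading match
    return ("search", cleaned)      # e.g. "fresh strawberries": text search
```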
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what data is retrieved (duty rates, preferential rates, quotas, trade volumes) and the source, but doesn't cover critical aspects like whether this is a read-only operation, potential rate limits, authentication requirements, error handling, or response format. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured in a single sentence. It front-loads the core purpose ('UK Trade & Customs Intelligence'), lists key data types, specifies the target ('for any commodity'), and cites the source. Every word contributes meaning without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is minimally complete. It covers what the tool does and the data source, but lacks details on behavioral traits, usage context, and output expectations. Without annotations or an output schema, the description should ideally provide more context about the operation's nature and results to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'commodity' documented as accepting commodity codes or product descriptions. The description adds no additional parameter semantics beyond what the schema provides, such as examples beyond '0201' or formatting details. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving UK trade and customs intelligence including duty rates, preferential rates, quotas, and trade volumes for commodities. It specifies the source (HMRC Trade Tariff) and distinguishes itself from siblings by focusing on trade intelligence rather than due diligence, education, energy, or other domains. However, it doesn't explicitly contrast with a specific alternative tool for trade data, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, exclusions, or specific contexts for application. While it implies usage for commodity-related trade queries, it lacks explicit instructions on when to choose it over other tools or what scenarios it's best suited for.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_transport_intelligence (grade B)
Read-only · Idempotent

Assess physical transport connectivity for any UK postcode. Returns nearest railway stations with walking distance and train operating companies, bus stops with route numbers and operators, road traffic flow counts, and a composite Connectivity Score. Use this tool for commuter access evaluation, logistics planning, or comparing transport links between locations. For digital connectivity (broadband, mobile coverage), use uk_connectivity_intelligence instead. Sources: DfT NaPTAN, DfT Traffic Counts.

Parameters (JSON Schema)

depth (optional; default: standard): Controls response detail. summary: nearest railway stations only. standard: adds bus stops, route information, and connectivity score. full: adds road traffic flow data and detailed operator information.
postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Transport data is returned for the surrounding area.
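The walking distances in the response presumably come from routing data. As a rough client-side stand-in, a straight-line great-circle (haversine) distance between a postcode centroid and a station can be computed like this:

```python
import math

# Straight-line great-circle (haversine) distance in metres between two
# lat/lon points: a rough stand-in for the routed walking distances the
# tool itself reports.
def haversine_m(lat1, lon1, lat2, lon2):
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```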
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but lacks behavioral details. It doesn't disclose rate limits, authentication needs, data freshness, error handling, or what 'connectivity score' means operationally. The source attribution (DfT NaPTAN) is helpful but insufficient for full transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, efficiently conveying the core purpose in one sentence with a source attribution. Every word earns its place without redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with full schema coverage but no output schema or annotations, the description is minimally adequate. It covers the what but lacks details on return format, error cases, or practical usage context that would help an agent invoke it correctly without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description doesn't add meaningful parameter semantics beyond what's in the schema—it mentions data types but doesn't explain parameter interactions or provide examples beyond the schema's enum descriptions for 'depth'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides intelligence on transport and connectivity for UK postcodes, listing specific data types (nearest stations, bus stops, traffic flow, connectivity score). It distinguishes itself from siblings by focusing on transport, but doesn't explicitly contrast with similar tools like 'uk_location_intelligence', which might overlap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives is provided. The description doesn't mention sibling tools or other contexts where this tool is preferred, nor does it specify prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_vat_validation (A)
Read-only · Idempotent

Validate a UK VAT registration number against HMRC's official register. Returns whether the VAT number is valid, the registered company name, and registered business address. Use this tool to verify a supplier or customer's VAT registration, confirm trading names match, or check VAT status before issuing invoices. For full company due diligence (officers, filings, distress score), use uk_entity_intelligence instead. Source: HMRC VAT Check API.

Parameters (JSON Schema)

vat_number (required): UK VAT registration number — 9 digits, optionally prefixed with "GB" (e.g., "123456789" or "GB123456789"). Spaces and hyphens are stripped automatically.
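The schema says the server strips spaces and hyphens and accepts an optional "GB" prefix, so a caller can mirror that normalisation locally before submitting. A sketch, assuming only the 9-digit form described above (real UK VAT numbers also have 12-digit branch and GD/HA government variants that this deliberately ignores); `normalise_vat` is a hypothetical helper:

```python
import re

def normalise_vat(raw: str) -> str:
    """Strip whitespace/hyphens and an optional GB prefix, then require
    exactly 9 digits — mirroring the normalisation the schema describes."""
    compact = re.sub(r"[\s-]", "", raw).upper()
    if compact.startswith("GB"):
        compact = compact[2:]
    if not re.fullmatch(r"\d{9}", compact):
        raise ValueError(f"not a 9-digit UK VAT number: {raw!r}")
    return compact

print(normalise_vat("GB 123-456-789"))  # 123456789
```

Note this only checks shape; whether the number is actually registered is what the HMRC lookup itself answers.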
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, destructiveHint. Description adds value by specifying data source (HMRC) and return fields (validity, company name, address). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise, front-loaded sentences with a source citation. No redundant information. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description lists return values (validity, company name, address). Single-parameter tool is fully covered with no gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Single parameter vat_number has full schema description covering format and auto-stripping. Description doesn't add new info beyond schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb 'validate', the resource 'UK VAT registration number', and the data source 'HMRC's official register'. It lists specific return values (validity, company name, address) and distinguishes from sibling tool 'uk_entity_intelligence'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: verify supplier/customer VAT registration, confirm trading names, check VAT status before invoicing. Also provides alternative: use uk_entity_intelligence for full due diligence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_vehicle_intelligence (C)
Read-only · Idempotent

Retrieve MOT test history for any UK-registered vehicle. Returns test results (pass/fail/refused), advisory notices, failure items, recorded mileage at each test, a structured Vehicle Health Assessment (0-100, rated EXCELLENT/GOOD/FAIR/POOR/CRITICAL) with component scoring, and enhanced mileage anomaly detection with severity grading (none/minor/major). Use this tool to check a used vehicle before purchase, verify mileage consistency, or assess fleet condition. Returns an error if the registration is not found or has no MOT history (e.g. vehicles under 3 years old). Source: DVSA MOT History API.

Parameters (JSON Schema)

depth (optional, default "standard"): Controls response detail. summary: vehicle profile, latest MOT test, and health assessment only. standard (default): last 10 MOT tests with mileage trend. full: complete MOT history (all tests) with mileage anomaly detection and severity grading (none/minor/major).
registration (required): UK vehicle registration number (e.g. "AB12CDE" or "AB12 CDE"). Case insensitive, spaces are ignored.
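The registration normalisation the schema describes (uppercase, spaces ignored), and the basic idea behind mileage anomaly detection, can be sketched as follows. Both helpers are hypothetical illustrations: the tool's actual none/minor/major severity grading is proprietary, so `mileage_anomalies` only flags the obvious case of a reading that goes backwards between tests.

```python
import re

def normalise_registration(raw: str) -> str:
    """Uppercase and drop whitespace, per the schema ("AB12 CDE" -> "AB12CDE")."""
    return re.sub(r"\s+", "", raw).upper()

def mileage_anomalies(readings: list[int]) -> list[int]:
    """Indices (in test order) where recorded mileage decreases relative to
    the previous MOT test — the simplest kind of mileage inconsistency."""
    return [i for i in range(1, len(readings)) if readings[i] < readings[i - 1]]

print(normalise_registration("ab12 cde"))               # AB12CDE
print(mileage_anomalies([30100, 41500, 39000, 52000]))  # [2]
```

A real grading would also weigh the size of the drop and gaps between test dates, which is presumably what the tool's severity levels encode.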
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It lists the types of data returned (MOT history, mileage trend, etc.) but doesn't disclose critical behavioral traits: whether this is a read-only operation, potential rate limits, authentication requirements, data freshness, or error conditions. For a data retrieval tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in a single sentence that lists key data points and the source. It's appropriately sized for a single-parameter tool, though it could be slightly more front-loaded by starting with the action verb. There's no wasted text, and every element serves to inform the user about what data is available.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (retrieving multiple data points about vehicles), no annotations, and no output schema, the description is minimally complete. It outlines what data is returned but doesn't provide format details, error handling, or usage constraints. For a tool with richer data output, more context would be helpful, but it meets basic requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the single parameter 'registration' clearly documented as a UK vehicle registration number with an example. The description doesn't add any parameter-specific information beyond what the schema provides, which is acceptable given the high schema coverage. The baseline score of 3 reflects adequate but minimal value addition from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves UK vehicle intelligence data including MOT history, mileage trend, advisories, failures, and Vehicle Health Score. It specifies the source (DVSA MOT History) and distinguishes itself from sibling tools by focusing on vehicle data rather than education, energy, property, etc. However, it doesn't explicitly name the action verb (e.g., 'retrieve' or 'get'), slightly reducing specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, limitations, or comparison with sibling tools like 'uk_transport_intelligence' which might have overlapping functionality. The user must infer usage from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

uk_water_intelligence (A)
Read-only · Idempotent

Assess water quality and pollution risk for any UK postcode. Returns storm overflow discharges (spill frequency, duration, receiving waterway), bathing water classifications, waterbody ecological and chemical status under the Water Framework Directive, water company Ofwat performance metrics, and a Water Quality Score (0-100). Use this tool for environmental due diligence near rivers or coastline, pollution exposure assessment, or water company research. For broader environmental risks (flood zones, geology, radon), use uk_environmental_risk instead. Sources: EA Water Quality Archive, EA EDM Storm Overflows, EA Bathing Water Classifications, Ofwat.

Parameters (JSON Schema)

depth (optional, default "standard"): Controls response detail. summary: Water Framework Directive status and storm overflow count only. standard (default): adds bathing water classifications, water company performance, and Water Quality Score. full: adds chemical status detail, full spill history with durations, and receiving waterway information.
postcode (required): Full UK postcode (e.g. "SW1A 1AA"). Water quality data is returned for the catchment area around this postcode.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It clearly indicates data sources and the proprietary Water Quality Score, implying read-only behavior. However, it does not disclose whether results are cached or whether API rate limits apply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively concise, listing key data types and sources in a single paragraph. However, it could be slightly more structured, e.g., separating usage from source attribution.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the data (multiple sources, scoring) and the lack of output schema, the description provides a good overview. It covers enough for an agent to understand the tool's scope, though it could mention result format or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds context about what each depth level includes, which is helpful but partially redundant with the schema's enum descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides UK Water Quality Intelligence, listing specific data categories (storm overflow discharges, bathing water classifications, etc.) and data sources. This distinguishes it from sibling tools like uk_environmental_risk or uk_location_intelligence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for water quality queries but does not explicitly state when to use this tool over alternatives. No guidance on when not to use it or prerequisites beyond a postcode.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
