UK Due Diligence
Server Details
UK due diligence — Companies House, Charity Commission, Land Registry, Gazette, HMRC VAT
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: paulieb89/uk-due-diligence-mcp
- GitHub Stars: 2
- Server Listing: UK Due Diligence
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 16 of 16 tools scored.
Each tool targets a distinct resource or action (charity, company, disqualified director, Gazette, land title, VAT, prompts), with clear boundaries between search, profile, and fetch operations. No overlapping purposes.
Most tools follow a noun_verb pattern (e.g., charity_profile, company_search), but a few use verb_noun (get_prompt, list_prompts) or single verbs (fetch, search). The inconsistency is minor and does not hinder readability.
16 tools cover a broad domain (multiple UK registers) and each serves a clear purpose. Slightly on the higher end but well-scoped for the due diligence context.
Covers major UK registers (Companies House, Charity Commission, disqualified directors, Gazette, Land Registry, VAT) and includes a meta-search. Minor gaps exist (e.g., no sanctions or credit checks), but core due diligence workflows are supported.
Available Tools
16 tools

charity_profile: Get Charity Profile (A, read-only, idempotent)
Fetch the full Charity Commission profile for a charity number.
Returns trustees, latest income/expenditure, insolvency flags, governing document type, classifications, and countries of operation. Use charity_search first to find the charity number.
| Name | Required | Description | Default |
|---|---|---|---|
| charity_number | Yes | Charity Commission registration number (e.g. '1234567'). Returned by charity_search. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| address | No | Registered address of the charity (joined address lines). |
| insolvent | No | True if the charity is flagged as insolvent. |
| reg_status | No | Registration status code ('R', 'RM'). |
| charity_name | No | Registered charity name. |
| charity_type | No | Charity type. |
| latest_income | No | Latest filed annual income in GBP. |
| trustee_names | No | Trustees on record. Truncated to 30 entries. |
| charity_number | Yes | Charity registration number. |
| who_what_where | No | Who/What/Where classification entries. The list may be truncated to 50 entries. |
| reg_status_label | No | Human-readable registration status. |
| in_administration | No | True if the charity is in administration. |
| latest_expenditure | No | Latest filed annual expenditure in GBP. |
| trustee_names_total | No | Total trustees upstream before truncation. |
| date_of_registration | No | Date of first registration. |
| who_what_where_total | No | Total classification entries upstream before truncation. |
| charity_co_reg_number | No | Companies House number for charities also registered as companies (Charitable Incorporated Organisations, etc.). |
| countries_of_operation | No | Countries the charity operates in (capped at 10 upstream). |
| trustee_names_truncated | No | True if the trustee list was truncated. |
| who_what_where_truncated | No | True if the classification list was truncated. |
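A minimal sketch of the search-then-profile flow described above. `call_tool` is a hypothetical helper standing in for whatever MCP client is in use, and the `charity_number` field inside each search record is an assumption; the flag names come from the output schema above.

```python
# Hypothetical shim for whatever MCP client is in use; the other sketches on
# this page assume the same helper.
def call_tool(name: str, arguments: dict) -> dict:
    raise NotImplementedError("wire this to your MCP client session")

# 1. Find the charity number by name (field name inside records is assumed).
hits = call_tool("charity_search", {"query": "British Red Cross"})
charity_number = hits["charities"][0]["charity_number"]

# 2. Pull the full profile and read the distress flags from the output schema.
profile = call_tool("charity_profile", {"charity_number": charity_number})
if profile.get("insolvent") or profile.get("in_administration"):
    print(f"WARNING: {profile.get('charity_name')} is insolvent or in administration")
```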
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable context about what specific data is returned (trustees, income/expenditure, filing history, etc.) and the tool's purpose for verification, which goes beyond the safety profile indicated by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core functionality, and the second explains the returned data and use cases. Every sentence earns its place with no wasted words, making it easy to scan and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that annotations cover safety aspects (read-only, non-destructive, idempotent), schema coverage is 100%, and an output schema exists, the description provides complete context. It explains what data is returned and the tool's utility, which complements the structured fields without redundancy.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters well-documented in the schema. The description does not add any additional parameter information beyond what the schema provides, such as format examples or constraints not already covered. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve'), resource ('Charity Commission profile for a registered charity'), and scope ('full' profile). It explicitly distinguishes this from sibling tools like 'charity_search' by focusing on detailed profile retrieval rather than search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('verifying charitable status and governance quality'), but does not explicitly state when not to use it or name specific alternatives among the sibling tools. It implies usage for detailed profile retrieval rather than search operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
charity_search: Search Charity Commission Register (A, read-only, idempotent)
Search the Charity Commission register of England and Wales by name or keyword.
Returns matching charities with registration number, status, and registration date. Use charity_profile for full details once you have the charity number. The upstream searchCharityName endpoint returns the full list in one shot; pagination is applied client-side via offset/limit.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max items to return in this page. Default 20; raise to 100 for bulk views. | |
| query | Yes | Charity name or keyword to search for | |
| offset | No | Number of items to skip before this page. Default 0. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| limit | Yes | Max items requested for this page. |
| query | Yes | Search term applied. |
| total | Yes | Total matches returned by upstream. |
| offset | Yes | Number of items skipped before this page (client-side). |
| has_more | Yes | True if more items may exist beyond this page. Re-call with offset=offset+returned to continue. |
| returned | Yes | Items actually returned on this page. |
| charities | No | Matching charity records. |
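A paging sketch, assuming the hypothetical `call_tool` helper from the first sketch. It follows the re-call rule documented in the output schema: advance offset by the number of items actually returned while has_more is true.

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

# Pagination is client-side (offset/limit); has_more and returned say when
# and how far to advance.
charities, offset = [], 0
while True:
    page = call_tool("charity_search",
                     {"query": "hospice", "offset": offset, "limit": 100})
    charities.extend(page.get("charities", []))
    if not page["has_more"] or not page["returned"]:
        break
    offset += page["returned"]  # re-call with offset = offset + returned
```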
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies what information is returned ('registration number, status, and activities summary') and clarifies the relationship with the charity_profile tool. While annotations cover safety (readOnlyHint=true, destructiveHint=false) and behavior (openWorldHint=true, idempotentHint=true), the description adds practical usage context without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with only two sentences that both earn their place. The first sentence states the purpose and scope, while the second provides crucial workflow guidance. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and the existence of an output schema, the description provides exactly what's needed: clear purpose, sibling differentiation, and workflow guidance. It doesn't need to explain return values since an output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all parameters are already documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3. It doesn't compensate for schema gaps because there are none to compensate for.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search the Charity Commission register'), the resource ('Charity Commission register of England and Wales'), and the method ('by name or keyword'). It distinguishes from its sibling 'charity_profile' by specifying that search returns basic information while charity_profile provides full details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-to-use guidance: 'Use charity_profile for full details once you have the charity number.' This clearly distinguishes between this search tool and its sibling profile tool, providing a clear workflow for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_officers: Get Company Officers (A, read-only, idempotent)
Fetch active officers for a Companies House company number.
Returns directors, secretaries, and other active officers with appointment dates, nationality, and country of residence. Resigned officers are excluded. Pagination is handled internally; do NOT pass items_per_page or start_index (they are accepted only to avoid call failures, and ignored), as company_number is the only input that matters.
| Name | Required | Description | Default |
|---|---|---|---|
| start_index | No | Ignored — all officers are returned in one call. | |
| company_number | Yes | Companies House company number (8 digits, e.g. '03782379'). Returned by company_search. | |
| items_per_page | No | Ignored — pagination is handled internally. Only accepted to avoid call failures. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| total | Yes | Total officers returned (filtered by include_resigned). |
| officers | No | Officer records. |
| company_number | Yes | Companies House company number. |
| include_resigned | Yes | Whether resigned officers were included in this result. |
| high_appointment_count_flag | No | Number of active officers with 10+ total appointments, or null if appointment counts were not fetched. Non-zero values are a nominee/phoenix director risk signal. |
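A short sketch, assuming the same hypothetical `call_tool` helper, of reading the nominee/phoenix risk signal this tool surfaces; note the flag is null (not 0) when appointment counts were not fetched.

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

officers = call_tool("company_officers", {"company_number": "03782379"})
# Distinguish null (counts not fetched) from 0 (fetched, no signal).
flag = officers.get("high_appointment_count_flag")
if flag is None:
    print("appointment counts not fetched; no signal either way")
elif flag:
    print(f"{flag} active officer(s) with 10+ appointments: nominee/phoenix risk")
```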
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations by explaining the risk flagging for directors with high appointment counts, which is a specific behavioral trait not captured in structured annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by additional context in a second sentence. Both sentences are information-dense with zero waste, efficiently covering functionality and risk insights without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (read-only query with risk analysis), rich annotations (covering safety and idempotency), 100% schema coverage, and the presence of an output schema, the description is complete enough. It explains the tool's purpose, data returned, and key behavioral insight (risk flagging), without needing to detail parameters or return values already documented elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters. The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining the significance of 'include_resigned' or 'response_format' choices. Baseline 3 is appropriate when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List directors and officers'), the resource ('Companies House company number'), and distinguishes from siblings by focusing on officers rather than profiles, searches, or other company data. It provides a comprehensive scope of what information is returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying it's for Companies House company numbers and mentions fraud detection as a use case. However, it doesn't explicitly state when to use this tool versus alternatives like 'company_profile' or 'company_search', which are sibling tools that might overlap in some contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_profile: Get Company Profile (A, read-only, idempotent)
Fetch the full Companies House profile for a company number.
Returns status, registered address, SIC codes, filing compliance (overdue accounts and confirmation statement flags), and whether the company has outstanding charges. Use company_search first to find the company number.
| Name | Required | Description | Default |
|---|---|---|---|
| company_number | Yes | Companies House company number (8 digits, e.g. '03782379'). Returned by company_search. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| accounts | No | Accounts filing status and due dates. |
| sic_codes | No | Standard Industrial Classification codes. |
| has_charges | No | True if the company has outstanding registered charges (secured debt), derived from the /charges endpoint. A due diligence signal. |
| company_name | No | Registered company name. |
| company_type | No | Companies House company type code. |
| company_number | Yes | Companies House company number. |
| company_status | No | Current status (active, dissolved, in liquidation, etc.). |
| date_of_creation | No | Incorporation date (ISO YYYY-MM-DD). |
| confirmation_statement | No | Confirmation statement filing status and next due date. |
| registered_office_address | No | Registered office address as returned by Companies House. |
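A sketch of reading the compliance signals called out above, again via the hypothetical `call_tool` helper; the `overdue` sub-field inside `accounts` is an assumption, since the schema only says it holds filing status and due dates.

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

profile = call_tool("company_profile", {"company_number": "03782379"})
print(profile.get("company_status"))   # 'active', 'dissolved', 'in liquidation', ...
if profile.get("has_charges"):
    print("outstanding registered charges (secured debt)")
accounts = profile.get("accounts") or {}
if accounts.get("overdue"):            # assumed sub-field; see lead-in
    print("accounts overdue")
```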
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide read-only, non-destructive, idempotent, and open-world hints. The description adds valuable behavioral context beyond annotations: it specifies the exact data fields returned (status, address, SIC codes, etc.) and highlights business significance ('early distress signals' for overdue accounts/high charges), which helps the agent interpret outputs meaningfully.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core purpose and key return fields, and the second adds interpretive context. Every phrase adds value without redundancy, and it's front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations (covering safety and behavior), a rich output schema (implied by 'Has output schema: true'), and 100% schema coverage, the description provides complete contextual information. It details the return content and its business relevance, making it fully adequate for agent use without needing to explain parameters or output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both parameters (company_number format and response_format options). The description doesn't add parameter-specific details beyond implying company_number is the primary input, so it meets the baseline of 3 without compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve'), resource ('Companies House profile'), and scope ('full profile') with explicit differentiation from siblings like company_search (which searches) and company_officers (which focuses on officers). It goes beyond the title by specifying the data source and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'for a specific company number,' suggesting this tool is for known entities rather than discovery. However, it doesn't explicitly state when to use alternatives like company_search (for unknown companies) or company_officers (for officer details only), leaving some inference required.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_psc: Get Persons with Significant Control (A, read-only, idempotent)
Fetch Persons with Significant Control (beneficial ownership) for a company.
Returns PSC entries with natures of control, nationality, and country of residence. Flags overseas corporate PSC entries as a beneficial ownership risk signal. Returns an explanatory note for widely-held PLCs with no registrable PSC.
| Name | Required | Description | Default |
|---|---|---|---|
| company_number | Yes | Companies House company number (8 digits, e.g. '03782379'). Returned by company_search. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| psc | No | Persons with Significant Control records. |
| note | No | Explanatory note when total=0. Typical for widely-held listed PLCs where no single person or entity holds 25%+ of shares or voting rights. |
| total | Yes | Total PSC entries returned for this company. |
| company_number | Yes | Companies House company number. |
| overseas_corporate_psc_flag | No | Number of corporate PSCs registered outside the UK. Non-zero values indicate an offshore beneficial ownership chain. |
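A sketch of the two signals described above, via the hypothetical `call_tool` helper: the zero-PSC explanatory note and the offshore ownership flag.

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

psc = call_tool("company_psc", {"company_number": "03782379"})
if psc["total"] == 0:
    print(psc.get("note"))  # typical for widely-held listed PLCs
elif psc.get("overseas_corporate_psc_flag"):
    print("corporate PSCs outside the UK: offshore beneficial ownership chain")
```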
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations by explaining what PSC data reveals (beneficial ownership thresholds and investigation flags). While annotations already declare readOnlyHint=true and other safety properties, the description provides domain-specific behavioral insights about what constitutes 'significant control' and investigation use cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear front-loading of the core purpose, followed by domain context that earns its place by explaining what PSC data represents and its investigative relevance. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and the presence of an output schema, the description provides complete contextual understanding. It explains the domain significance of PSC data without needing to cover technical details already in structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without providing extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve') and resource ('Persons with Significant Control for a company'), distinguishing it from siblings like company_officers or company_profile. It provides domain-specific context about what PSC data represents, which helps differentiate its purpose from other company-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by explaining that PSC data reveals beneficial ownership and is a key flag in investigations, suggesting when this tool would be relevant. However, it doesn't explicitly state when to use this tool versus alternatives like company_officers or provide explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_search: Search Companies House (A, read-only, idempotent)
Search the Companies House register by company name or keyword.
Returns a paginated list of matching companies with name, number, status, SIC codes, incorporation date, and registered address. Use company_profile for the full record once you have the company number. Re-call with start_index=start_index+items_per_page to fetch the next page.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Company name or keyword to search for | |
| start_index | No | Pagination offset. Default 0. | |
| company_type | No | Filter by company type (e.g. 'ltd', 'llp'). Omit to search all. | |
| company_status | No | Filter by company status (e.g. 'active', 'dissolved'). Omit to search all. | |
| items_per_page | No | Number of results to return (max 100). Default 20. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | Matching companies. Use the `company_number` field to call company_profile, company_officers, or company_psc for full detail. |
| query | Yes | The query string that was searched. |
| has_more | Yes | True if more results exist beyond this page. Re-call with start_index=start_index+items_per_page to fetch the next page. |
| returned | Yes | Number of items actually returned on this page. |
| start_index | Yes | Number of results skipped before this page (upstream start_index). |
| total_results | Yes | Total matching companies in Companies House (server-side). |
| items_per_page | Yes | Page size requested from the API for this call. |
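A paging sketch via the hypothetical `call_tool` helper, following the documented re-call rule (start_index advances by items_per_page while has_more is true):

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

# Server-side pagination: advance start_index by items_per_page while has_more.
items, start_index, per_page = [], 0, 100
while True:
    page = call_tool("company_search", {"query": "carillion",
                                        "start_index": start_index,
                                        "items_per_page": per_page})
    items.extend(page.get("items", []))
    if not page["has_more"]:
        break
    start_index += per_page  # re-call with start_index += items_per_page
```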
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it discloses the paginated nature of results, specifies the exact fields returned in search results, and mentions the response format options. While annotations cover safety (readOnlyHint=true, destructiveHint=false), the description provides operational details not captured in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with two focused sentences: the first explains the tool's purpose and scope, the second provides clear usage guidance. Every word earns its place with zero redundancy or wasted space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and existence of an output schema, the description provides exactly what's needed: clear purpose, differentiation from siblings, and key behavioral details about pagination and result fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all 6 parameters. The description mentions 'company name or keyword' which aligns with the query parameter but doesn't add meaningful semantic context beyond what's already in the well-documented schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search the Companies House register'), resource ('companies'), and scope ('by company name or keyword'). It distinguishes from sibling tools like 'company_profile' by indicating this is a search tool rather than a detailed profile tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-to-use guidance: 'Use company_profile for the full record once you have the company number.' This clearly distinguishes this search tool from its sibling profile tool and provides a clear workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
disqualified_profile: Get Disqualified Director Profile (A, read-only, idempotent)
Fetch the full disqualification record for a director by officer ID.
Returns all disqualification orders: reason, Act/section cited, disqualification period, and associated company names. Use disqualified_search first to find the officer ID.
| Name | Required | Description | Default |
|---|---|---|---|
| officer_id | Yes | Companies House officer ID. Returned by disqualified_search. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| name | No | Officer name. |
| surname | No | Family name, if split upstream. |
| forename | No | Given name, if split upstream. |
| officer_id | Yes | Companies House officer ID looked up. |
| nationality | No | Declared nationality. |
| officer_kind | Yes | Which CH endpoint returned the record: 'natural' (individual) or 'corporate' (legal entity). |
| date_of_birth | No | Date of birth on record. |
| disqualifications | No | All disqualification orders attached to this officer. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond this by explaining the dual endpoint strategy (natural person then corporate officer) and specifying the source of officer_id, which enhances behavioral understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by return details and usage notes in subsequent sentences. Each sentence adds value without redundancy, making it efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (retrieving detailed disqualification records), the description is complete with purpose, usage context, and behavioral notes. Annotations cover safety and idempotency, and an output schema exists, so the description appropriately focuses on adding value without needing to explain return values or repeat structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (officer_id and response_format). The description adds minimal semantics by mentioning officer_id comes from disqualified_search results, but this is already hinted in the schema. No additional parameter details are provided, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'full disqualification record for a disqualified director', specifying it returns all disqualification orders with details like reason, Act and section, period, associated companies, and undertaking details. It distinguishes from sibling tools like 'disqualified_search' by focusing on retrieving detailed records rather than searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by stating 'The officer_id comes from the disqualified_search results' and mentions trying 'the natural person endpoint first, then the corporate officer endpoint', which guides when to use this tool. However, it does not explicitly state when not to use it or name alternatives beyond the implied disqualified_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
disqualified_search: Search Disqualified Directors (A, read-only, idempotent)
Check whether a named individual is banned from acting as a UK company director.
Use this tool when asked to check disqualified, banned, or barred directors. Query must be an individual's name (e.g. "Richard Howson") — NOT a company name, which always returns zero results.
Returns names, dates of birth, disqualification period snippets, and officer IDs that can be used with disqualified_profile for full details.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Name of the person to search for | |
| start_index | No | Pagination offset (0-based). Default 0. | |
| items_per_page | No | Results per page (max 100). Default 20. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | Matching disqualified officer records. |
| query | Yes | Search query applied. |
| has_more | Yes | True if more items may exist beyond this page. Re-call with start_index=start_index+items_per_page to continue. |
| returned | Yes | Items actually returned on this page. |
| start_index | Yes | Pagination offset for this page. |
| total_results | Yes | Total matching records upstream at Companies House. |
| items_per_page | Yes | Page size requested. |
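A sketch of the search-then-profile chain, assuming the hypothetical `call_tool` helper and that each search item carries the officer_id the description promises:

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

# The query must be a person's name; a company name always returns zero results.
hits = call_tool("disqualified_search", {"query": "Richard Howson"})
for item in hits.get("items", []):
    record = call_tool("disqualified_profile", {"officer_id": item["officer_id"]})
    for order in record.get("disqualifications", []):
        print(record.get("name"), order)
```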
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies that results include names, dates of birth, disqualification periods, and officer IDs, and that officer IDs can be used with 'disqualified_profile'. Annotations cover safety (readOnlyHint, destructiveHint) and operational traits (openWorldHint, idempotentHint), but the description enhances this by detailing the return structure and inter-tool workflow, without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by return details and usage guidelines in two additional sentences. Every sentence adds value: the first defines the tool, the second specifies outputs and links to another tool, and the third provides clear usage context. There is no wasted text, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a search operation with pagination and format options), rich annotations (readOnlyHint, openWorldHint, etc.), and the presence of an output schema, the description is complete. It covers purpose, outputs, usage context, and inter-tool relationships without needing to detail parameters or return values, which are handled by structured fields. This provides sufficient context for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all parameters (query, start_index, items_per_page, response_format). The description does not add any parameter-specific semantics beyond the schema, such as explaining search behavior or format implications. It only implies a name-based search, which is already covered in the schema's query description. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search Companies House for disqualified directors by name') and resources ('disqualified directors'), distinguishing it from siblings like 'company_search' or 'charity_search' by focusing specifically on disqualified directors. It explicitly mentions the UK context and the ability to check disqualification status, making the purpose unambiguous and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use this to check whether an individual has been disqualified from acting as a company director in the UK.' It also mentions an alternative tool ('disqualified_profile') for full details, indicating when to switch to another tool. This covers both primary use cases and alternatives clearly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch: Fetch Full Record from UK Due Diligence Register (A, read-only, idempotent)
Fetch the full record for an ID returned by search.
Routes by prefix to the appropriate register:
- company:{number} → Companies House full profile
- charity:{number} → Charity Commission full profile
- disqualification:{officer_id} → Disqualified director full record
- notice:{notice_id} → Gazette notice full legal text
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Prefixed record ID returned by search. Format: company:{number}, charity:{number}, disqualification:{officer_id}, or notice:{notice_id} | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
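A sketch of the prefix routing, using the hypothetical `call_tool` helper; the IDs below are placeholders, not real register keys.

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

# The prefix picks the register; the remainder is that register's native key.
for record_id in ("company:03782379", "charity:1234567",
                  "disqualification:abc123", "notice:4567890"):
    record = call_tool("fetch", {"id": record_id})
    print(record_id, "->", record)
```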
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds transparency about routing behavior based on prefix, which is beyond what annotations provide. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise and front-loaded: one sentence for purpose, then bulleted list for routing. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description need not explain return values. It thoroughly covers routing logic for all prefix types, making it complete for its purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter 'id' has a detailed description with format examples. The description complements by explaining how prefixes route to different registers, adding context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch' and the resource 'full record for an ID returned by search'. It distinguishes from sibling tools by explaining routing by prefix to different registers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explains when to use (after search returns IDs) and provides routing logic for different prefixes. While it doesn't explicitly state when not to use, the context is clear. Alternatives are implicit in the routing but not named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gazette_insolvency: Search Gazette Corporate Insolvency Notices (A, read-only, idempotent)
Search The Gazette's insolvency notice index by entity name.
Searches the Gazette's insolvency endpoint which covers corporate notice codes: winding-up orders (2443), administration orders (2448), liquidator appointments (2452), striking-off notices (2460), and more. Results are sorted by severity — winding-up orders and administration orders appear first.
Each result includes a notice_numeric_id. Read the full legal wording via the notice://{notice_numeric_id} resource.
The Gazette is the official UK public record. A notice here means the event has been formally published and is legally effective.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | No | Filter notices up to this date (YYYY-MM-DD) | |
| start_date | No | Filter notices from this date (YYYY-MM-DD) | |
| entity_name | Yes | Company or individual name to search for in Gazette insolvency notices | |
| max_notices | No | Cap on notices returned, applied after severity/date sort. Default 20. The Gazette insolvency feed returns up to 100 results per search — raise to 100 to see the full set. | |
| notice_type | No | Filter by notice code (e.g. '2441' winding-up petition, '2443' winding-up order, '2448' administration order, '2460' striking-off). Omit to search all. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| notices | No | Matching notices, sorted by severity (desc) then date (desc). |
| end_date | No | Upper bound of the date range filter, if any. |
| start_date | No | Lower bound of the date range filter, if any. |
| entity_name | Yes | Entity name that was searched. |
| total_notices | Yes | Total notices returned after deduplication, sorting, and cap. |
| max_notices_cap | Yes | The max_notices cap applied. Upstream may have more matching notices. |
| notice_type_filter | No | Notice code filter applied, or null if all codes searched. |
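A sketch of filtering for the most severe notices and pulling full legal text, assuming the hypothetical `call_tool` helper and that each notice carries the notice_numeric_id the description promises:

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

# Code 2443 = winding-up orders; the severity sort already puts the worst first.
found = call_tool("gazette_insolvency", {"entity_name": "Carillion",
                                         "notice_type": "2443",
                                         "max_notices": 100})
for notice in found.get("notices", []):
    full = call_tool("gazette_notice",
                     {"notice_id": notice["notice_numeric_id"]})
    print(full)
```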
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it explains that results are sorted by severity (winding-up orders first), notes The Gazette is the official UK public record, and clarifies that notices are legally effective. This enhances understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with four sentences that each add value: it states the purpose, specifies search scope, explains result sorting, and provides context about The Gazette. There is no wasted text, and information is front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with filtering and sorting), rich annotations (read-only, open-world, etc.), 100% schema coverage, and the presence of an output schema, the description is complete enough. It covers purpose, scope, behavior, and legal context without needing to detail parameters or return values, which are handled elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics beyond the schema, such as mentioning searches by entity name and notice codes 2441-2460, but does not provide additional details on parameter usage or interactions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches The Gazette's linked-data API for corporate insolvency notices, specifying the notice codes (2441-2460) and types of notices (winding-up petitions, administration orders, etc.). It distinguishes itself from siblings by focusing on insolvency notices rather than company profiles, charity data, or other searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: searching for corporate insolvency notices in the official UK public record. It implies usage by mentioning that results are sorted by severity and that notices are legally effective. However, it does not explicitly state when not to use it or name alternatives among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gazette_notice: Get Gazette Notice Full Text (A, read-only, idempotent)
Fetch the full legal wording of a Gazette notice by numeric notice ID.
Returns the complete JSON-LD linked-data record for the notice: parties, legal basis, court, and full text. Use gazette_insolvency first to find notice_numeric_id values.
| Name | Required | Description | Default |
|---|---|---|---|
| notice_id | Yes | Numeric Gazette notice ID. Returned as notice_numeric_id by gazette_insolvency. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, etc. Description adds that it returns a 'complete JSON-LD linked-data record' with parties, legal basis, court, and full text, which is valuable beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each earning its place: first for purpose, second for return content and usage guidance. Front-loaded and no extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given an output schema exists, the description need not explain return values. It covers the purpose, prerequisite, and parameter origin. Complete for a simple one-param tool with rich annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description restates that notice_id is numeric and mentions the sibling tool for finding it, which adds slight context but doesn't significantly enhance understanding beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Fetch the full legal wording of a Gazette notice by numeric notice ID.' It specifies the verb, resource, and input. It distinguishes from sibling gazette_insolvency by positioning it as a prerequisite tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit instruction: 'Use gazette_insolvency first to find notice_numeric_id values.' This tells the agent the correct workflow. Missing explicit when-not-to-use or alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_prompt (A)
Get a prompt by name with optional arguments.
Returns the rendered prompt as JSON with a messages array. Arguments should be provided as a dict mapping argument names to values.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The name of the prompt to get | |
| arguments | No | Optional arguments for the prompt | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
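A sketch via the hypothetical `call_tool` helper; the prompt name and argument below are invented for illustration, and list_prompts reports what actually exists.

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

# Prompt name and arguments are invented for illustration.
rendered = call_tool("get_prompt",
                     {"name": "company_due_diligence",
                      "arguments": {"company_number": "03782379"}})
print(rendered["result"])  # rendered-prompt JSON with a messages array
```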
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. It does disclose the output format ('JSON with a messages array'), but omits behavioral details like side effects, rate limits, or error handling. Adequate for a simple read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose, no redundant information. Every sentence is meaningful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Has an output schema (though not detailed here), and description partially explains return format. For a tool with 2 params and no nested objects, the description is fairly complete, though missing notes on error scenarios.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters described in schema). The description adds value by specifying that arguments should be a dict mapping names to values, which goes beyond the schema's generic 'Optional arguments for the prompt'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'get', the resource 'prompt by name', and mentions optional arguments. It distinguishes from sibling 'list_prompts' which likely returns all prompts, while this retrieves one by name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., list_prompts). No mention of prerequisites, appropriate contexts, or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
land_title_search: Search Price Paid Transactions by Postcode (A, read-only, idempotent)
Search HM Land Registry Price Paid Index by postcode or address.
Returns up to 10 recent sale transactions for the postcode: price, date, address, property type, and tenure (Freehold/Leasehold). Covers England and Wales only. Postcode gives the most reliable results — a full address is also accepted and the postcode is extracted automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| address_or_postcode | Yes | UK property address or postcode. Postcode is most reliable: e.g. 'NG1 1AB'. Full address also accepted. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| total | Yes | Number of Price Paid transactions returned. Capped at 10 by the upstream SPARQL query. |
| postcode | Yes | Normalised UK postcode extracted from the input. |
| transactions | No | Recent Price Paid transactions for the postcode, sorted newest first. |
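A sketch via the hypothetical `call_tool` helper; the postcode is illustrative.

```python
def call_tool(name, args): ...  # hypothetical MCP shim, as in the first sketch

# A postcode is the most reliable input; a full address also works since the
# postcode is extracted upstream. England and Wales only, capped at 10 results.
sales = call_tool("land_title_search", {"address_or_postcode": "NG1 1AB"})
print("postcode used:", sales["postcode"])
for tx in sales.get("transactions", []):  # sorted newest first
    print(tx)
```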
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide strong hints (readOnlyHint=true, destructiveHint=false, etc.), but the description adds valuable context beyond this: it specifies the geographic scope ('Covers England and Wales only'), lists the types of data returned (proprietor name, title class, tenure, price paid), and notes reliability tips ('Postcode is most reliable'). This enhances transparency without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific return details and scope. Every sentence adds value (e.g., data returned, geographic limitation, reliability tip) with zero waste, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (readOnlyHint, openWorldHint, etc.), 100% schema coverage, and the presence of an output schema, the description is complete. It covers purpose, usage context, behavioral details, and scope without needing to restate parameters or return values, which are handled elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single parameter. The description adds minimal parameter semantics beyond the schema: it mentions 'address or postcode' and notes that 'Postcode is most reliable', which slightly reinforces the schema but doesn't provide new syntax or format details. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search HM Land Registry Price Paid Index'), the resource (sale transactions), and the scope ('by postcode or address'). It distinguishes this tool from siblings like charity_search or company_search by limiting it to Land Registry data for England and Wales, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (searching sale transactions by address or postcode) and implies when not to (for non-property data such as charities or companies, covered by sibling tools). However, it doesn't explicitly name alternatives or state exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_prompts
List all available prompts.
Returns JSON with prompt metadata including name, description, and optional arguments.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
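Since the 'result' field is undocumented in the output schema, the sketch below shows one plausible shape for the returned prompt metadata, inferred from the description's mention of name, description, and optional arguments; the prompt entry itself is invented.

```python
import json

# Hypothetical shape of the JSON carried in the 'result' field, inferred
# from the description (name, description, optional arguments). The prompt
# entry is invented for illustration.
example_result = [
    {
        "name": "company_red_flags",
        "description": "Summarise due diligence risk signals for a company",
        "arguments": [{"name": "company_number", "required": True}],
    }
]
print(json.dumps(example_result, indent=2))
```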
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full behavioral burden. It discloses that the tool returns JSON with name, description, and optional arguments, but omits details like read-only nature, side effects, or performance characteristics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the verb 'List', and every word adds value. No redundant or extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters and an existing output schema, the description covers the essential return fields (name, description, optional arguments). It is sufficiently complete for an agent to understand the tool's purpose and output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the baseline score of 4 applies. The description adds no parameter info, but none is needed; it does mention 'optional arguments' in the output, which subtly indicates the output structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List' and the resource 'all available prompts', distinguishing it from sibling tools like 'get_prompt' which targets individual prompts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for when to use the tool (listing all prompts) but does not explicitly exclude scenarios or mention alternatives like 'get_prompt' for individual prompt retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search: Search UK Due Diligence Registers (Read-only, Idempotent)
Search across all UK due diligence registers simultaneously.
Searches Companies House, Charity Commission, disqualified directors, and Gazette insolvency notices in parallel. Returns a list of result IDs — use fetch with each ID to retrieve the full record.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Company name, charity name, director name, or keyword to search for across all UK due diligence registers |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
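A sketch of the two-step workflow the description prescribes: one search call, then one fetch call per returned ID. The query, the ID format, and the assumption that fetch takes a single 'id' argument are all illustrative; consult the fetch tool's own schema for the real parameter name.

```python
import json

# Step 1: search all registers for a (hypothetical) subject.
search_call = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "search", "arguments": {"query": "Acme Holdings"}},
}

# Step 2: suppose search returned these IDs (format is illustrative only);
# issue one fetch call per ID. The 'id' argument name is an assumption;
# check the fetch tool's schema for the actual parameter.
result_ids = ["companies_house:01234567", "gazette:3700123"]
fetch_calls = [
    {
        "jsonrpc": "2.0",
        "id": 4 + i,
        "method": "tools/call",
        "params": {"name": "fetch", "arguments": {"id": rid}},
    }
    for i, rid in enumerate(result_ids)
]

print(json.dumps([search_call, *fetch_calls], indent=2))
```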
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, destructiveHint=false, idempotentHint=true, openWorldHint=true. The description adds that the search runs in parallel across registers and returns a list of result IDs, which complements the annotations without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short, clear sentences. Front-loaded with the main action and key registers, immediately followed by the return format and next steps. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (a single parameter) and its annotations, the description fully covers what the tool does, what it returns, and how to proceed (using fetch). It is complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with a description for 'query' that matches the tool description. The description does not add new semantic information beyond the schema, so the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it searches across all UK due diligence registers simultaneously, listing specific registers (Companies House, Charity Commission, disqualified directors, Gazette insolvency notices). Distinguishes from sibling tools like company_search, charity_search, etc., which are single-register searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes the tool's broad search scope and provides a clear follow-up action: use fetch with each returned ID to get full records. Implicitly guides when to use this tool vs. single-register siblings, though not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vat_validate: Validate UK VAT Number (HMRC) (Read-only, Idempotent)
Validate a UK VAT number against the HMRC register.
Returns the trading name and address as registered with HMRC for VAT purposes. The VAT-registered trading address often differs from the Companies House registered address — that discrepancy is a due diligence signal worth noting.
| Name | Required | Description | Default |
|---|---|---|---|
| vat_number | Yes | UK VAT registration number. Accepts: 'GB123456789', '123456789', 'GB 123 456 789'. GB prefix and spaces normalised automatically. |
Output Schema
| Name | Required | Description |
|---|---|---|
| valid | Yes | True if HMRC confirmed the VAT number is currently registered. False means HMRC returned 404 (not registered / deregistered). |
| vat_number | Yes | Canonical VAT number in 'GB<9 digits>' format. |
| trading_name | No | Trading name registered with HMRC for VAT. Compare with the Companies House name — discrepancies are a due diligence signal. |
| registered_address | No | VAT-registered trading address. May differ from the Companies House registered office address. |
| consultation_number | No | HMRC consultation reference number for this lookup. |
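A sketch showing that the three accepted input formats from the parameter table should all normalise to the same canonical number; the VAT number is the schema's example value, not a real registration.

```python
import json

# The three accepted formats from the parameter table. All should normalise
# to the canonical 'GB123456789'; the number is the schema's example value,
# not a real VAT registration.
for i, raw in enumerate(["GB123456789", "123456789", "GB 123 456 789"]):
    call = {
        "jsonrpc": "2.0",
        "id": 10 + i,
        "method": "tools/call",
        "params": {"name": "vat_validate", "arguments": {"vat_number": raw}},
    }
    print(json.dumps(call))
```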
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint, covering safety and idempotency. The description adds valuable behavioral context beyond the annotations: it specifies that the tool returns the trading name and address, and notes that this address may differ from the Companies House record (a due diligence signal); normalisation of VAT number formats is documented in the parameter schema. This enriches understanding without contradicting the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by additional context in two concise sentences. Every sentence adds value: the first states the action, the second specifies the return data, and the third provides important due diligence insight. There is no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (covering safety and idempotency), 100% schema description coverage, and the presence of an output schema (which handles return values), the description is complete enough. It adds context about address discrepancies and normalization behavior that complements the structured data, making it fully adequate for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with a detailed description for the single parameter (vat_number). The tool description does not add meaningful semantic information beyond what the schema provides, such as parameter interactions or usage nuances. With high schema coverage, the baseline score of 3 is appropriate: the description doesn't compensate for gaps, but it also doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Validate a UK VAT number') and resource ('against the HMRC register'), distinguishing it from all sibling tools which focus on charities, companies, disqualified persons, insolvency, or land titles rather than VAT validation. It precisely identifies its unique domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to validate UK VAT numbers and retrieve registered details), but it does not explicitly mention when not to use it or name alternatives for similar validation tasks (e.g., if other tools exist for non-UK VAT). The context is well-defined but lacks explicit exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.