UK Due Diligence
Server Details
UK due diligence — Companies House, Charity Commission, Land Registry, Gazette, HMRC VAT
- Status: Healthy
- Transport: Streamable HTTP
- Repository: paulieb89/uk-due-diligence-mcp
- GitHub Stars: 1
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 11 of 11 tools scored.
Every tool has a clearly distinct purpose targeting specific UK due diligence data sources: Charity Commission, Companies House, disqualified directors, The Gazette, Land Registry, and HMRC VAT. There is no overlap in functionality—each tool serves a unique verification or search need with clear boundaries.
Tool names follow a consistent pattern of 'resource_action' (e.g., charity_search, company_profile, disqualified_search) using snake_case throughout. This predictable naming makes it easy to understand what each tool does and maintains uniformity across all 11 tools.
With 11 tools, the server is well-scoped for UK due diligence, covering key areas like charity verification, company checks, director disqualifications, insolvency notices, property ownership, and VAT validation. Each tool earns its place by addressing a distinct aspect of due diligence without being excessive or insufficient.
The tool set provides comprehensive coverage for UK due diligence, including CRUD-like operations (search and retrieve profiles) across multiple official registries. There are no obvious gaps—agents can verify entities, check risks, and investigate ownership across charities, companies, directors, insolvency, property, and VAT in a cohesive workflow.
Available Tools
11 tools

charity_profile: Get Full Charity Profile (read-only, idempotent)
Retrieve the full Charity Commission profile for a registered charity.
Returns trustees, income/expenditure, filing history, governing document type, area of operation, and beneficiary description. Useful for verifying charitable status and governance quality. Trustee and classification lists are capped via max_trustees and max_classifications to keep responses bounded.
| Name | Required | Description | Default |
|---|---|---|---|
| max_trustees | No | Cap on the number of trustees returned. Prolific charities have 50+ trustees on file. Default 30. | |
| charity_number | Yes | Charity Commission registration number, e.g. '1234567' | |
| max_classifications | No | Cap on the number of Who/What/Where classification entries returned. Large charities have 100+. Default 50. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| raw | No | Full raw Charity Commission profile payload for any field not surfaced explicitly on this model. |
| address | No | Registered address of the charity (joined address lines). |
| insolvent | No | True if the charity is flagged as insolvent. |
| reg_status | No | Registration status code (e.g. 'R' = registered, 'RM' = removed). |
| charity_name | No | Registered charity name. |
| charity_type | No | Charity type. |
| latest_income | No | Latest filed annual income in GBP. |
| trustee_names | No | Trustees on record. The list may be truncated per the `max_trustees` input. |
| charity_number | Yes | Charity registration number. |
| who_what_where | No | Who/What/Where classification entries. The list may be truncated per the `max_classifications` input. |
| reg_status_label | No | Human-readable registration status. |
| in_administration | No | True if the charity is in administration. |
| latest_expenditure | No | Latest filed annual expenditure in GBP. |
| trustee_names_total | No | Total trustees upstream before truncation. |
| date_of_registration | No | Date of first registration. |
| who_what_where_total | No | Total classification entries upstream before truncation. |
| charity_co_reg_number | No | Companies House number for charities also registered as companies (Charitable Incorporated Organisations, etc.). |
| countries_of_operation | No | Countries the charity operates in (capped at 10 upstream). |
| trustee_names_truncated | No | True if the trustee list was truncated. |
| who_what_where_truncated | No | True if the classification list was truncated. |
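The truncation flags above lend themselves to a simple re-fetch check. A minimal Python sketch, with the response dict stubbed to match the output schema (`needs_trustee_refetch` is a hypothetical client-side helper, not part of the server):

```python
def needs_trustee_refetch(profile: dict) -> bool:
    """True when the trustee list was capped and a re-call with a
    higher max_trustees would surface more names."""
    returned = len(profile.get("trustee_names", []))
    total = profile.get("trustee_names_total", returned)
    return bool(profile.get("trustee_names_truncated")) and total > returned

# Stubbed response shaped like the output schema above (not a live call).
profile = {
    "charity_number": "1234567",
    "trustee_names": ["A. Patel", "B. Shaw", "C. Okoro"],
    "trustee_names_total": 55,
    "trustee_names_truncated": True,
}
print(needs_trustee_refetch(profile))  # True -> re-call with max_trustees=55
```

If the flag is set, re-calling with `max_trustees` set to `trustee_names_total` retrieves the complete list in one follow-up call.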
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable context about what specific data is returned (trustees, income/expenditure, filing history, etc.) and the tool's purpose for verification, which goes beyond the safety profile indicated by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: it opens with the core functionality, then explains the returned data, use cases, and truncation behaviour. Every sentence earns its place with no wasted words, making it easy to scan and understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that annotations cover safety aspects (read-only, non-destructive, idempotent), schema coverage is 100%, and an output schema exists, the description provides complete context. It explains what data is returned and the tool's utility, which complements the structured fields without redundancy.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters well-documented in the schema. The description does not add any additional parameter information beyond what the schema provides, such as format examples or constraints not already covered. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve'), resource ('Charity Commission profile for a registered charity'), and scope ('full' profile). It explicitly distinguishes this from sibling tools like 'charity_search' by focusing on detailed profile retrieval rather than search functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('verifying charitable status and governance quality'), but does not explicitly state when not to use it or name specific alternatives among the sibling tools. It implies usage for detailed profile retrieval rather than search operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
charity_search: Search Charity Commission Register (read-only, idempotent)
Search the Charity Commission register of England and Wales by name or keyword.
Returns matching charities with registration number, status, and registration date. Use charity_profile for full details once you have the charity number. The upstream searchCharityName endpoint returns the full list in one response; pagination is applied client-side via offset/limit.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max items to return in this page. Default 20; raise to 100 for bulk views. | |
| query | Yes | Charity name or keyword to search for | |
| offset | No | Number of items to skip before this page. Default 0. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| limit | Yes | Max items requested for this page. |
| query | Yes | Search term applied. |
| total | Yes | Total matches returned by upstream. |
| offset | Yes | Number of items skipped before this page (client-side). |
| has_more | Yes | True if more items may exist beyond this page. Re-call with offset=offset+returned to continue. |
| returned | Yes | Items actually returned on this page. |
| charities | No | Matching charity records. |
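The offset/returned/has_more contract can be driven with a small client-side loop. A sketch, assuming a `search` callable that wraps the charity_search tool (stubbed here; a real call would go through your MCP client):

```python
def collect_all(search, query: str, limit: int = 20) -> list:
    """Walk every page of a search that reports offset/returned/has_more,
    re-calling with offset = offset + returned as the schema advises."""
    offset, items = 0, []
    while True:
        page = search(query=query, offset=offset, limit=limit)
        items.extend(page["charities"])
        if not page["has_more"] or page["returned"] == 0:
            return items
        offset += page["returned"]

# Stub standing in for the real charity_search tool call.
DATA = [{"charity_number": str(i)} for i in range(45)]
def fake_search(query, offset, limit):
    chunk = DATA[offset:offset + limit]
    return {"charities": chunk, "returned": len(chunk),
            "has_more": offset + len(chunk) < len(DATA)}

print(len(collect_all(fake_search, "youth")))  # 45
```

The `returned == 0` guard prevents an infinite loop if upstream ever reports has_more on an empty page.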
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations: it specifies what information is returned ('registration number, status, and registration date') and clarifies the relationship with the charity_profile tool. While annotations cover safety (readOnlyHint=true, destructiveHint=false) and behavior (openWorldHint=true, idempotentHint=true), the description adds practical usage context without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with every sentence earning its place: it states the purpose and scope, provides crucial workflow guidance, and notes the pagination behaviour. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and the existence of an output schema, the description provides exactly what's needed: clear purpose, sibling differentiation, and workflow guidance. It doesn't need to explain return values since an output schema exists.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all parameters are already documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3. It doesn't compensate for schema gaps because there are none to compensate for.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search the Charity Commission register'), the resource ('Charity Commission register of England and Wales'), and the method ('by name or keyword'). It distinguishes from its sibling 'charity_profile' by specifying that search returns basic information while charity_profile provides full details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-to-use guidance: 'Use charity_profile for full details once you have the charity number.' This clearly distinguishes between this search tool and its sibling profile tool, providing a clear workflow for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_officers: List Company Officers (read-only, idempotent)
List directors and officers for a Companies House company number.
Returns names, roles, appointment dates, nationality, and total appointment count. Directors with a high appointment count (>=10 other companies) are flagged via high_appointment_count_flag, a common trait in nominee director fraud and phoenix company structures.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max officers to fetch from Companies House (upstream items_per_page). Default 100. | |
| company_number | Yes | Companies House company number | |
| include_resigned | No | If true, include resigned officers alongside active ones. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| total | Yes | Total officers returned (filtered by include_resigned). |
| officers | No | Officer records. |
| company_number | Yes | Companies House company number. |
| include_resigned | Yes | Whether resigned officers were included in this result. |
| high_appointment_count_flag | No | Number of active officers with 10+ total appointments. Non-zero values are a nominee/phoenix director risk signal. |
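The high_appointment_count_flag can be reproduced (or sanity-checked) from the officer records themselves. A sketch; the `appointment_count` and `resigned` field names on individual officer records are assumptions, not confirmed by the schema above:

```python
NOMINEE_THRESHOLD = 10  # matches the >=10 rule in the tool description

def high_appointment_count(officers: list) -> int:
    """Count active officers at or above the nominee/phoenix-risk
    threshold, mirroring high_appointment_count_flag."""
    return sum(
        1 for o in officers
        if not o.get("resigned")
        and o.get("appointment_count", 0) >= NOMINEE_THRESHOLD
    )

# Stubbed officer records for illustration.
officers = [
    {"name": "J SMITH", "appointment_count": 14},
    {"name": "A JONES", "appointment_count": 2},
    {"name": "R NOMINEE", "appointment_count": 30, "resigned": True},
]
print(high_appointment_count(officers))  # 1
```

Resigned officers are excluded, matching the schema's note that the flag counts active officers only.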
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations by explaining the risk flagging for directors with high appointment counts, which is a specific behavioral trait not captured in structured annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by the returned fields and the risk-flag behaviour. Each sentence is information-dense with zero waste, efficiently covering functionality and risk insights without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (read-only query with risk analysis), rich annotations (covering safety and idempotency), 100% schema coverage, and the presence of an output schema, the description is complete enough. It explains the tool's purpose, data returned, and key behavioral insight (risk flagging), without needing to detail parameters or return values already documented elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all three parameters. The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining the significance of the 'include_resigned' or 'limit' choices. Baseline 3 is appropriate when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List directors and officers'), the resource ('Companies House company number'), and distinguishes from siblings by focusing on officers rather than profiles, searches, or other company data. It provides a comprehensive scope of what information is returned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying it's for Companies House company numbers and mentions fraud detection as a use case. However, it doesn't explicitly state when to use this tool versus alternatives like 'company_profile' or 'company_search', which are sibling tools that might overlap in some contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_profile: Get Full Company Profile (read-only, idempotent)
Retrieve the full Companies House profile for a specific company number.
Returns corporate status, registered address, SIC codes, accounts and confirmation statement filing status (with overdue flags), active-charges flag, and incorporation date. Accounts overdue and active charges are early distress signals worth cross-referencing with gazette_insolvency.
| Name | Required | Description | Default |
|---|---|---|---|
| company_number | Yes | Companies House company number, e.g. '12345678' or 'SC123456' | |
Output Schema
| Name | Required | Description |
|---|---|---|
| raw | No | Full raw Companies House profile payload. Use for any field not surfaced explicitly on this model. |
| accounts | No | Accounts filing status and due dates. |
| sic_codes | No | Standard Industrial Classification codes. |
| has_charges | No | True if the company has active registered charges (secured debt). A due diligence signal. |
| company_name | No | Registered company name. |
| company_type | No | Companies House company type code. |
| company_number | Yes | Companies House company number. |
| company_status | No | Current status (active, dissolved, in liquidation, etc.). |
| date_of_creation | No | Incorporation date (ISO YYYY-MM-DD). |
| confirmation_statement | No | Confirmation statement filing status and next due date. |
| registered_office_address | No | Registered office address as returned by Companies House. |
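Those distress signals can be triaged mechanically before deciding whether to call gazette_insolvency. A sketch; the nested `overdue` flags inside `accounts` and `confirmation_statement` are assumed field names, not guaranteed by the schema above:

```python
def distress_signals(profile: dict) -> list:
    """Collect early distress indicators from a company_profile
    response that are worth cross-referencing with gazette_insolvency."""
    signals = []
    if (profile.get("accounts") or {}).get("overdue"):
        signals.append("accounts overdue")
    if (profile.get("confirmation_statement") or {}).get("overdue"):
        signals.append("confirmation statement overdue")
    if profile.get("has_charges"):
        signals.append("active registered charges")
    return signals

# Stubbed response shaped like the output schema above.
profile = {
    "company_number": "12345678",
    "company_status": "active",
    "accounts": {"overdue": True},
    "confirmation_statement": {"overdue": False},
    "has_charges": True,
}
print(distress_signals(profile))  # ['accounts overdue', 'active registered charges']
```

A non-empty result is the cue to follow up with gazette_insolvency for the same company.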
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide read-only, non-destructive, idempotent, and open-world hints. The description adds valuable behavioral context beyond annotations: it specifies the exact data fields returned (status, address, SIC codes, etc.) and highlights business significance ('early distress signals' for overdue accounts/high charges), which helps the agent interpret outputs meaningfully.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: the first sentence states the core purpose, the rest detail the key return fields and add interpretive context. Every phrase adds value without redundancy, and it's front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of annotations (covering safety and behavior), a rich output schema (implied by 'Has output schema: true'), and 100% schema coverage, the description provides complete contextual information. It details the return content and its business relevance, making it fully adequate for agent use without needing to explain parameters or output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents the single parameter (company_number, including format examples). The description doesn't add parameter-specific details beyond implying company_number is the primary input, so it meets the baseline of 3 without compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve'), resource ('Companies House profile'), and scope ('full profile') with explicit differentiation from siblings like company_search (which searches) and company_officers (which focuses on officers). It goes beyond the title by specifying the data source and scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'for a specific company number,' suggesting this tool is for known entities rather than discovery. However, it doesn't explicitly state when to use alternatives like company_search (for unknown companies) or company_officers (for officer details only), leaving some inference required.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_psc: Get Persons with Significant Control (read-only, idempotent)
Retrieve Persons with Significant Control (PSC) for a company.
PSC data reveals beneficial ownership: individuals or corporate entities holding >25% shares, voting rights, or appointment power. Corporate PSC entries with overseas registration addresses are a key flag in beneficial ownership investigations and surface as overseas_corporate_psc_flag on the response.
| Name | Required | Description | Default |
|---|---|---|---|
| company_number | Yes | Companies House company number | |
| max_nature_chars | No | Per-entry cap on each 'nature of control' descriptor. Upstream entries are sometimes long legal text. Default 300. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| psc | No | Persons with Significant Control records. |
| total | Yes | Total PSC entries returned for this company. |
| company_number | Yes | Companies House company number. |
| overseas_corporate_psc_flag | No | Number of corporate PSCs registered outside the UK. Non-zero values indicate an offshore beneficial ownership chain. |
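The overseas flag is just a count over the PSC entries. A sketch of the equivalent check; the `kind` and `country_of_registration` entry fields are assumptions modelled on Companies House PSC data, not guaranteed by the schema above:

```python
UK_NAMES = {"united kingdom", "uk", "england", "wales",
            "scotland", "northern ireland"}

def overseas_corporate_pscs(entries: list) -> int:
    """Count corporate PSCs registered outside the UK, the signal
    surfaced as overseas_corporate_psc_flag."""
    count = 0
    for e in entries:
        if not e.get("kind", "").startswith("corporate"):
            continue  # skip individual PSCs
        country = e.get("country_of_registration", "").strip().lower()
        if country and country not in UK_NAMES:
            count += 1
    return count

# Stubbed PSC entries for illustration.
entries = [
    {"kind": "corporate-entity-person-with-significant-control",
     "country_of_registration": "British Virgin Islands"},
    {"kind": "corporate-entity-person-with-significant-control",
     "country_of_registration": "United Kingdom"},
    {"kind": "individual-person-with-significant-control"},
]
print(overseas_corporate_pscs(entries))  # 1
```

Entries with no registration country are skipped rather than counted, since an absent field is not evidence of an offshore chain.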
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable context beyond annotations by explaining what PSC data reveals (beneficial ownership thresholds and investigation flags). While annotations already declare readOnlyHint=true and other safety properties, the description provides domain-specific behavioral insights about what constitutes 'significant control' and investigation use cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear front-loading of the core purpose, followed by domain context that earns its place by explaining what PSC data represents and its investigative relevance. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and the presence of an output schema, the description provides complete contextual understanding. It explains the domain significance of PSC data without needing to cover technical details already in structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without providing extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve') and resource ('Persons with Significant Control for a company'), distinguishing it from siblings like company_officers or company_profile. It provides domain-specific context about what PSC data represents, which helps differentiate its purpose from other company-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by explaining that PSC data reveals beneficial ownership and is a key flag in investigations, suggesting when this tool would be relevant. However, it doesn't explicitly state when to use this tool versus alternatives like company_officers or provide explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
company_search: Search Companies House (read-only, idempotent)
Search the Companies House register by company name or keyword.
Returns a paginated list of matching companies with name, number, status, SIC codes, incorporation date, and registered address. Use company_profile for the full record once you have the company number. Re-call with start_index=start_index+items_per_page to fetch the next page.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Company name or keyword to search for | |
| start_index | No | Pagination offset. Default 0. | |
| company_type | No | Filter by company type (e.g. 'ltd', 'llp'). Omit to search all. | |
| company_status | No | Filter by company status (e.g. 'active', 'dissolved'). Omit to search all. | |
| items_per_page | No | Number of results to return (max 100). Default 20. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | Matching companies. Use the `company_number` field to call company_profile, company_officers, or company_psc for full detail. |
| query | Yes | The query string that was searched. |
| has_more | Yes | True if more results exist beyond this page. Re-call with start_index=start_index+items_per_page to fetch the next page. |
| returned | Yes | Number of items actually returned on this page. |
| start_index | Yes | Number of results skipped before this page (upstream start_index). |
| total_results | Yes | Total matching companies in Companies House (server-side). |
| items_per_page | Yes | Page size requested from the API for this call. |
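Unlike charity_search, company_search paginates server-side via start_index. Computing the next page index is one line; a sketch against a response shaped like the schema above (stubbed, not a live call):

```python
def next_start_index(page: dict):
    """Return the start_index for the next company_search page,
    or None when has_more says the result set is exhausted."""
    if not page["has_more"]:
        return None
    return page["start_index"] + page["items_per_page"]

# Stubbed first page of a 73-result search.
page = {"query": "acme", "start_index": 0, "items_per_page": 20,
        "returned": 20, "total_results": 73, "has_more": True}
print(next_start_index(page))  # 20
```

A loop over this helper terminates when it returns None, matching the has_more contract in the output schema.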
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it discloses the paginated nature of results, specifies the exact fields returned in search results, and mentions the response format options. While annotations cover safety (readOnlyHint=true, destructiveHint=false), the description provides operational details not captured in structured fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is tightly structured: it explains the tool's purpose and scope, then provides clear usage and pagination guidance. Every word earns its place with zero redundancy or wasted space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and existence of an output schema, the description provides exactly what's needed: clear purpose, differentiation from siblings, and key behavioral details about pagination and result fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema already fully documents all five parameters. The description mentions 'company name or keyword' which aligns with the query parameter but doesn't add meaningful semantic context beyond what's already in the well-documented schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search the Companies House register'), resource ('companies'), and scope ('by company name or keyword'). It distinguishes from sibling tools like 'company_profile' by indicating this is a search tool rather than a detailed profile tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-to-use guidance: 'Use company_profile for the full record once you have the company number.' This clearly distinguishes this search tool from its sibling profile tool and provides a clear workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
disqualified_profile: Disqualified Director Profile (read-only, idempotent)
Get the full disqualification record for a disqualified director.
Returns all disqualification orders: reason, Act and section, period, associated companies, and undertaking details. The officer_id comes from the disqualified_search results. Tries the natural person endpoint first, then the corporate officer endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| officer_id | Yes | Officer ID from disqualified_search results | |
| max_companies | No | Per-order cap on the `company_names[]` array. Prolific disqualified directors are attached to 20+ companies per order. Default 20. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| name | No | Officer name. |
| surname | No | Family name, if split upstream. |
| forename | No | Given name, if split upstream. |
| officer_id | Yes | Companies House officer ID looked up. |
| nationality | No | Declared nationality. |
| officer_kind | Yes | Which CH endpoint returned the record: 'natural' (individual) or 'corporate' (legal entity). |
| date_of_birth | No | Date of birth on record. |
| disqualifications | No | All disqualification orders attached to this officer. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond this by explaining the dual endpoint strategy (natural person then corporate officer) and specifying the source of officer_id, which enhances behavioral understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by return details and usage notes in subsequent sentences. Each sentence adds value without redundancy, making it efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (retrieving detailed disqualification records), the description is complete with purpose, usage context, and behavioral notes. Annotations cover safety and idempotency, and an output schema exists, so the description appropriately focuses on adding value without needing to explain return values or repeat structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (officer_id and response_format). The description adds minimal semantics by mentioning officer_id comes from disqualified_search results, but this is already hinted in the schema. No additional parameter details are provided, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'full disqualification record for a disqualified director', specifying it returns all disqualification orders with details like reason, Act and section, period, associated companies, and undertaking details. It distinguishes from sibling tools like 'disqualified_search' by focusing on retrieving detailed records rather than searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by stating 'The officer_id comes from the disqualified_search results' and mentions trying 'the natural person endpoint first, then the corporate officer endpoint', which guides when to use this tool. However, it does not explicitly state when not to use it or name alternatives beyond the implied disqualified_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
disqualified_search — Search Disqualified Directors (A · Read-only · Idempotent)
Search Companies House for disqualified directors by name.
Returns names, dates of birth, disqualification period snippets, and officer IDs that can be used with disqualified_profile for full details. Use this to check whether an individual has been disqualified from acting as a company director in the UK.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Name of the person to search for | |
| start_index | No | Pagination offset (0-based). | 0 |
| items_per_page | No | Results per page (max 100). | 20 |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | Matching disqualified officer records. |
| query | Yes | Search query applied. |
| has_more | Yes | True if more items may exist beyond this page. Re-call with start_index=start_index+items_per_page to continue. |
| returned | Yes | Items actually returned on this page. |
| start_index | Yes | Pagination offset for this page. |
| total_results | Yes | Total matching records upstream at Companies House. |
| items_per_page | Yes | Page size requested. |
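The pagination contract in the output schema (re-call with `start_index = start_index + items_per_page` until `has_more` is false) can be driven by a small loop. A minimal sketch, with `search` standing in for a disqualified_search call and `max_pages` as a safety valve:

```python
def search_all(search, query, items_per_page=20, max_pages=10):
    """Collect every page of results by following the has_more flag."""
    start_index, collected = 0, []
    for _ in range(max_pages):
        page = search(query=query, start_index=start_index,
                      items_per_page=items_per_page)
        collected.extend(page.get("items", []))
        if not page["has_more"]:
            break
        start_index += items_per_page
    return collected
```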
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies that results include names, dates of birth, disqualification periods, and officer IDs, and that officer IDs can be used with 'disqualified_profile'. Annotations cover safety (readOnlyHint, destructiveHint) and operational traits (openWorldHint, idempotentHint), but the description enhances this by detailing the return structure and inter-tool workflow, without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by return details and usage guidelines in two additional sentences. Every sentence adds value: the first defines the tool, the second specifies outputs and links to another tool, and the third provides clear usage context. There is no wasted text, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a search operation with pagination and format options), rich annotations (readOnlyHint, openWorldHint, etc.), and the presence of an output schema, the description is complete. It covers purpose, outputs, usage context, and inter-tool relationships without needing to detail parameters or return values, which are handled by structured fields. This provides sufficient context for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents all parameters (query, start_index, items_per_page, response_format). The description does not add any parameter-specific semantics beyond the schema, such as explaining search behavior or format implications. It only implies a name-based search, which is already covered in the schema's query description. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search Companies House for disqualified directors by name') and resources ('disqualified directors'), distinguishing it from siblings like 'company_search' or 'charity_search' by focusing specifically on disqualified directors. It explicitly mentions the UK context and the ability to check disqualification status, making the purpose unambiguous and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use this to check whether an individual has been disqualified from acting as a company director in the UK.' It also mentions an alternative tool ('disqualified_profile') for full details, indicating when to switch to another tool. This covers both primary use cases and alternatives clearly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gazette_insolvency — Search Gazette Corporate Insolvency Notices (A · Read-only · Idempotent)
Search The Gazette's linked-data API for corporate insolvency notices.
Searches notice codes 2441-2460 (winding-up petitions, administration orders, liquidation appointments, striking-off notices, etc.) by entity name. Results are sorted by severity — winding-up orders and administration orders appear first.
The Gazette is the official UK public record. A notice here means the event has been formally published and is legally effective.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | No | Filter notices up to this date (YYYY-MM-DD) | |
| start_date | No | Filter notices from this date (YYYY-MM-DD) | |
| entity_name | Yes | Company or individual name to search for in Gazette insolvency notices | |
| notice_type | No | Filter by notice code (e.g. '2441' winding-up petition, '2443' winding-up order, '2448' administration order, '2460' striking-off). Omit to search all. | |
| max_content_chars | No | Per-notice cap on the free-text `content` field. The default keeps responses bounded; raise it for notices where the full legal wording matters. | 500 |
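The "sorted by severity (desc) then date (desc)" behaviour can be reproduced with two stable sorts. The rank table below is an assumption for illustration: the description only documents that winding-up and administration orders sort first, and the codes come from the Gazette notice-code scheme quoted above.

```python
# Lower rank = more severe. Unknown codes sort last.
SEVERITY_RANK = {"2443": 0, "2448": 1, "2441": 2, "2460": 3}

def sort_notices(notices):
    """Most severe first; within a severity band, newest first."""
    newest_first = sorted(notices, key=lambda n: n["date"], reverse=True)
    return sorted(newest_first, key=lambda n: SEVERITY_RANK.get(n["code"], 99))
```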
Output Schema
| Name | Required | Description |
|---|---|---|
| notices | No | Matching notices, sorted by severity (desc) then date (desc). |
| end_date | No | Upper bound of the date range filter, if any. |
| start_date | No | Lower bound of the date range filter, if any. |
| entity_name | Yes | Entity name that was searched. |
| total_notices | Yes | Total notices returned after deduplication and sorting. |
| notice_type_filter | No | Notice code filter applied, or null if all codes searched. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it explains that results are sorted by severity (winding-up orders first), notes The Gazette is the official UK public record, and clarifies that notices are legally effective. This enhances understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with four sentences that each add value: it states the purpose, specifies search scope, explains result sorting, and provides context about The Gazette. There is no wasted text, and information is front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with filtering and sorting), rich annotations (read-only, open-world, etc.), 100% schema coverage, and the presence of an output schema, the description is complete enough. It covers purpose, scope, behavior, and legal context without needing to detail parameters or return values, which are handled elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics beyond the schema, such as mentioning searches by entity name and notice codes 2441-2460, but does not provide additional details on parameter usage or interactions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches The Gazette's linked-data API for corporate insolvency notices, specifying the notice codes (2441-2460) and types of notices (winding-up petitions, administration orders, etc.). It distinguishes itself from siblings by focusing on insolvency notices rather than company profiles, charity data, or other searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: searching for corporate insolvency notices in the official UK public record. It implies usage by mentioning that results are sorted by severity and that notices are legally effective. However, it does not explicitly state when not to use it or name alternatives among sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
land_title_search — Search HMLR Land Registry Title (A · Read-only · Idempotent)
Search HM Land Registry for property ownership data by address or postcode.
Returns registered proprietor name, title class (absolute/qualified/possessory), tenure (freehold/leasehold), and recent price paid transactions. Covers England and Wales only. Price paid transactions are hard-capped at 10 upstream.
| Name | Required | Description | Default |
|---|---|---|---|
| address_or_postcode | Yes | UK property address or postcode. Postcode is most reliable: e.g. 'NG1 1AB'. Full address also accepted. | |
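The output schema mentions a "normalised UK postcode extracted from the input". A rough sketch of that extraction is below; the regex is an approximation for illustration (real UK postcode validation has more edge cases), and the server's own logic is not published.

```python
import re

# Approximate UK postcode shape: outward code, then digit + two letters.
POSTCODE = re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.I)

def extract_postcode(address_or_postcode):
    """Return the postcode in canonical 'OUTWARD INWARD' form, or None."""
    m = POSTCODE.search(address_or_postcode)
    if m is None:
        return None
    raw = m.group().upper().replace(" ", "")
    return f"{raw[:-3]} {raw[-3:]}"
```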
Output Schema
| Name | Required | Description |
|---|---|---|
| total | Yes | Number of Price Paid transactions returned. Capped at 10 by the upstream SPARQL query. |
| postcode | Yes | Normalised UK postcode extracted from the input. |
| title_data | No | Title ownership data from the HMLR title endpoint. Currently always empty — the free title endpoint does not return data for most lookups. |
| transactions | No | Recent Price Paid transactions for the postcode, sorted newest first. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide strong hints (readOnlyHint=true, destructiveHint=false, etc.), but the description adds valuable context beyond this: it specifies the geographic scope ('Covers England and Wales only'), lists the types of data returned (proprietor name, title class, tenure, price paid), and notes reliability tips ('Postcode is most reliable'). This enhances transparency without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific return details and scope. Every sentence adds value (e.g., data returned, geographic limitation, reliability tip) with zero waste, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (readOnlyHint, openWorldHint, etc.), 100% schema coverage, and the presence of an output schema, the description is complete. It covers purpose, usage context, behavioral details, and scope without needing to explain parameters or return values, which are handled elsewhere.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters. The description adds minimal param semantics beyond the schema—it mentions 'address or postcode' and 'Postcode is most reliable', which slightly reinforces the schema but doesn't provide new syntax or format details. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search HM Land Registry'), resource ('property ownership data'), and scope ('by address or postcode'). It distinguishes this tool from sibling tools like charity_search or company_search by specifying it's for land registry data in England and Wales only, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('by address or postcode'), and implies when not to use it (for non-property data like charities or companies, based on sibling tools). However, it doesn't explicitly name alternatives or state exclusions, such as when to use other property-related tools if they existed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
vat_validate — Validate UK VAT Number (HMRC) (A · Read-only · Idempotent)
Validate a UK VAT number against the HMRC register.
Returns the trading name and address as registered with HMRC for VAT purposes. The VAT-registered trading address often differs from the Companies House registered address — that discrepancy is a due diligence signal worth noting.
| Name | Required | Description | Default |
|---|---|---|---|
| vat_number | Yes | UK VAT registration number. Accepts: 'GB123456789', '123456789', 'GB 123 456 789'. GB prefix and spaces normalised automatically. | |
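The normalisation the parameter description promises (GB prefix and spaces handled, canonical 'GB&lt;9 digits&gt;' output as in the output schema) can be sketched like this. It is illustrative only, not the server's code.

```python
import re

def normalise_vat(raw):
    """Normalise a UK VAT number to canonical 'GB<9 digits>' form."""
    cleaned = re.sub(r"\s+", "", raw.upper())  # strip all whitespace
    if cleaned.startswith("GB"):
        cleaned = cleaned[2:]
    if not re.fullmatch(r"\d{9}", cleaned):
        raise ValueError(f"not a recognised UK VAT number: {raw!r}")
    return f"GB{cleaned}"
```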
Output Schema
| Name | Required | Description |
|---|---|---|
| valid | Yes | True if HMRC confirmed the VAT number is currently registered. False means HMRC returned 404 (not registered / deregistered). |
| vat_number | Yes | Canonical VAT number in 'GB<9 digits>' format. |
| trading_name | No | Trading name registered with HMRC for VAT. Compare with the Companies House name — discrepancies are a due diligence signal. |
| registered_address | No | VAT-registered trading address. May differ from the Companies House registered office address. |
| consultation_number | No | HMRC consultation reference number for this lookup. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it specifies that the tool returns the trading name and address, notes that this address may differ from Companies House (a due diligence signal), and implies it performs normalization on VAT number formats. This enriches understanding without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by additional context in two concise sentences. Every sentence adds value: the first states the action, the second specifies the return data, and the third provides important due diligence insight. There is no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations (covering safety and idempotency), 100% schema description coverage, and the presence of an output schema (which handles return values), the description is complete enough. It adds context about address discrepancies and normalization behavior that complements the structured data, making it fully adequate for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with detailed descriptions for both parameters (vat_number and response_format). The description does not add any meaningful semantic information beyond what the schema provides, such as explaining parameter interactions or usage nuances. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Validate a UK VAT number') and resource ('against the HMRC register'), distinguishing it from all sibling tools which focus on charities, companies, disqualified persons, insolvency, or land titles rather than VAT validation. It precisely identifies its unique domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (to validate UK VAT numbers and retrieve registered details), but it does not explicitly mention when not to use it or name alternatives for similar validation tasks (e.g., if other tools exist for non-UK VAT). The context is well-defined but lacks explicit exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.