Ownership verified

Server Details

UK due diligence — Companies House, Charity Commission, Land Registry, Gazette, HMRC VAT

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL
Repository: paulieb89/uk-due-diligence-mcp
GitHub Stars: 2
Server Listing: UK Due Diligence

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.3/5 across 16 of 16 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool targets a distinct resource or action (charity, company, disqualified director, Gazette, land title, VAT, prompts), with clear boundaries between search, profile, and fetch operations. No overlapping purposes.

Naming Consistency: 4/5

Most tools follow a noun_verb pattern (e.g., charity_profile, company_search), but a few use verb_noun (get_prompt, list_prompts) or single verbs (fetch, search). The inconsistency is minor and does not hinder readability.

Tool Count: 4/5

16 tools cover a broad domain (multiple UK registers) and each serves a clear purpose. Slightly on the higher end but well-scoped for the due diligence context.

Completeness: 4/5

Covers major UK registers (Companies House, Charity Commission, disqualified directors, Gazette, Land Registry, VAT) and includes a meta-search. Minor gaps exist (e.g., no sanctions or credit checks), but core due diligence workflows are supported.

Available Tools

16 tools
charity_profile: Get Charity Profile (Grade: A)
Read-only · Idempotent

Fetch the full Charity Commission profile for a charity number.

Returns trustees, latest income/expenditure, insolvency flags, governing document type, classifications, and countries of operation. Use charity_search first to find the charity number.

Parameters
charity_number (required): Charity Commission registration number (e.g. '1234567'). Returned by charity_search.

Output Schema

address (optional): Registered address of the charity (joined address lines).
insolvent (optional): True if the charity is flagged as insolvent.
reg_status (optional): Registration status code ('R', 'RM').
charity_name (optional): Registered charity name.
charity_type (optional): Charity type.
latest_income (optional): Latest filed annual income in GBP.
trustee_names (optional): Trustees on record. Truncated to 30 entries.
charity_number (required): Charity registration number.
who_what_where (optional): Who/What/Where classification entries. The list may be truncated to 50 entries.
reg_status_label (optional): Human-readable registration status.
in_administration (optional): True if the charity is in administration.
latest_expenditure (optional): Latest filed annual expenditure in GBP.
trustee_names_total (optional): Total trustees upstream before truncation.
date_of_registration (optional): Date of first registration.
who_what_where_total (optional): Total classification entries upstream before truncation.
charity_co_reg_number (optional): Companies House number for charities also registered as companies (Charitable Incorporated Organisations, etc.).
countries_of_operation (optional): Countries the charity operates in (capped at 10 upstream).
trustee_names_truncated (optional): True if the trustee list was truncated.
who_what_where_truncated (optional): True if the classification list was truncated.
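
As a concrete illustration of the search-then-profile flow described above, here is a minimal Python sketch. It assumes a hypothetical call_tool(name, arguments) helper standing in for however your MCP client issues tool calls, and it assumes charity_search results expose a charity_number field; the search tool's output shape is not shown on this page.

```python
def check_charity(call_tool, name):
    """Look up a charity by name, then fetch its full Charity Commission profile.

    call_tool is a hypothetical wrapper around the MCP client's tools/call request.
    """
    hits = call_tool("charity_search", {"name": name})  # assumed search input/output shape
    if not hits:
        print(f"No Charity Commission match for {name!r}")
        return None

    charity_number = hits[0]["charity_number"]  # assumed field on search results
    profile = call_tool("charity_profile", {"charity_number": charity_number})

    # The fields below come from the documented output schema.
    print(profile.get("charity_name"), "-", profile.get("reg_status_label"))
    if profile.get("insolvent") or profile.get("in_administration"):
        print("Warning: insolvency or administration flag is set")
    if profile.get("trustee_names_truncated"):
        print(f"Trustee list truncated; {profile.get('trustee_names_total')} trustees upstream")
    return profile
```
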
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable context about what specific data is returned (trustees, income/expenditure, filing history, etc.) and the tool's purpose for verification, which goes beyond the safety profile indicated by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core functionality, and the second explains the returned data and use cases. Every sentence earns its place with no wasted words, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that annotations cover safety aspects (read-only, non-destructive, idempotent), schema coverage is 100%, and an output schema exists, the description provides complete context. It explains what data is returned and the tool's utility, which complements the structured fields without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters well-documented in the schema. The description does not add any additional parameter information beyond what the schema provides, such as format examples or constraints not already covered. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve'), resource ('Charity Commission profile for a registered charity'), and scope ('full' profile). It explicitly distinguishes this from sibling tools like 'charity_search' by focusing on detailed profile retrieval rather than search functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('verifying charitable status and governance quality'), but does not explicitly state when not to use it or name specific alternatives among the sibling tools. It implies usage for detailed profile retrieval rather than search operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

company_officers: Get Company Officers (Grade: A)
Read-only · Idempotent

Fetch active officers for a Companies House company number.

Returns directors, secretaries, and other active officers with appointment dates, nationality, and country of residence. Resigned officers are excluded. Pagination is handled internally — do NOT pass items_per_page or start_index; this tool takes only company_number.

Parameters
start_index (optional): Ignored — all officers are returned in one call.
company_number (required): Companies House company number (8 digits, e.g. '03782379'). Returned by company_search.
items_per_page (optional): Ignored — pagination is handled internally. Only accepted to avoid call failures.

Output Schema

total (required): Total officers returned (filtered by include_resigned).
officers (optional): Officer records.
company_number (required): Companies House company number.
include_resigned (required): Whether resigned officers were included in this result.
high_appointment_count_flag (optional): Number of active officers with 10+ total appointments, or null if appointment counts were not fetched. Non-zero values are a nominee/phoenix director risk signal.
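
A short sketch of how the officers call and the documented high_appointment_count_flag might be used, again assuming a hypothetical call_tool helper for the MCP tools/call request; only company_number is passed, as the description instructs.

```python
def officer_risk_summary(call_tool, company_number):
    # Per the description, pagination is internal: pass only company_number.
    result = call_tool("company_officers", {"company_number": company_number})

    print(f"{result['total']} active officer(s) returned for {result['company_number']}")
    flag = result.get("high_appointment_count_flag")
    if flag:
        # Documented as a nominee/phoenix director risk signal when non-zero.
        print(f"{flag} officer(s) hold 10+ appointments - review for nominee/phoenix risk")
    elif flag is None:
        print("Appointment counts were not fetched upstream")
    return result.get("officers") or []
```
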
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations by explaining the risk flagging for directors with high appointment counts, which is a specific behavioral trait not captured in structured annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by additional context in a second sentence. Both sentences are information-dense with zero waste, efficiently covering functionality and risk insights without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (read-only query with risk analysis), rich annotations (covering safety and idempotency), 100% schema coverage, and the presence of an output schema, the description is complete enough. It explains the tool's purpose, data returned, and key behavioral insight (risk flagging), without needing to detail parameters or return values already documented elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters. The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining the significance of 'include_resigned' or 'response_format' choices. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List directors and officers'), the resource ('Companies House company number'), and distinguishes from siblings by focusing on officers rather than profiles, searches, or other company data. It provides a comprehensive scope of what information is returned.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it's for Companies House company numbers and mentions fraud detection as a use case. However, it doesn't explicitly state when to use this tool versus alternatives like 'company_profile' or 'company_search', which are sibling tools that might overlap in some contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

company_profile: Get Company Profile (Grade: A)
Read-only · Idempotent

Fetch the full Companies House profile for a company number.

Returns status, registered address, SIC codes, filing compliance (overdue accounts and confirmation statement flags), and whether the company has outstanding charges. Use company_search first to find the company number.

Parameters
company_number (required): Companies House company number (8 digits, e.g. '03782379'). Returned by company_search.

Output Schema

accounts (optional): Accounts filing status and due dates.
sic_codes (optional): Standard Industrial Classification codes.
has_charges (optional): True if the company has outstanding registered charges (secured debt), derived from the /charges endpoint. A due diligence signal.
company_name (optional): Registered company name.
company_type (optional): Companies House company type code.
company_number (required): Companies House company number.
company_status (optional): Current status (active, dissolved, in liquidation, etc.).
date_of_creation (optional): Incorporation date (ISO YYYY-MM-DD).
confirmation_statement (optional): Confirmation statement filing status and next due date.
registered_office_address (optional): Registered office address as returned by Companies House.
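
The compliance fields above lend themselves to a simple red-flag check. The sketch below assumes the same hypothetical call_tool helper and only inspects fields documented in the output schema; the inner structure of accounts and confirmation_statement is not shown here, so they are treated as opaque.

```python
def company_red_flags(call_tool, company_number):
    profile = call_tool("company_profile", {"company_number": company_number})
    flags = []

    status = profile.get("company_status")
    if status and status != "active":  # e.g. dissolved, in liquidation
        flags.append(f"company status is '{status}'")
    if profile.get("has_charges"):
        flags.append("outstanding registered charges (secured debt)")
    if not profile.get("accounts"):
        flags.append("no accounts filing information returned")
    if not profile.get("confirmation_statement"):
        flags.append("no confirmation statement information returned")
    return flags
```
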
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide read-only, non-destructive, idempotent, and open-world hints. The description adds valuable behavioral context beyond annotations: it specifies the exact data fields returned (status, address, SIC codes, etc.) and highlights business significance ('early distress signals' for overdue accounts/high charges), which helps the agent interpret outputs meaningfully.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core purpose and key return fields, and the second adds interpretive context. Every phrase adds value without redundancy, and it's front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of annotations (covering safety and behavior), a rich output schema (implied by 'Has output schema: true'), and 100% schema coverage, the description provides complete contextual information. It details the return content and its business relevance, making it fully adequate for agent use without needing to explain parameters or output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters (company_number format and response_format options). The description doesn't add parameter-specific details beyond implying company_number is the primary input, so it meets the baseline of 3 without compensating for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve'), resource ('Companies House profile'), and scope ('full profile') with explicit differentiation from siblings like company_search (which searches) and company_officers (which focuses on officers). It goes beyond the title by specifying the data source and scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'for a specific company number,' suggesting this tool is for known entities rather than discovery. However, it doesn't explicitly state when to use alternatives like company_search (for unknown companies) or company_officers (for officer details only), leaving some inference required.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

company_psc: Get Persons with Significant Control (Grade: A)
Read-only · Idempotent

Fetch Persons with Significant Control (beneficial ownership) for a company.

Returns PSC entries with natures of control, nationality, and country of residence. Flags overseas corporate PSC entries as a beneficial ownership risk signal. Returns an explanatory note for widely-held PLCs with no registrable PSC.

Parameters
company_number (required): Companies House company number (8 digits, e.g. '03782379'). Returned by company_search.

Output Schema

psc (optional): Persons with Significant Control records.
note (optional): Explanatory note when total=0. Typical for widely-held listed PLCs where no single person or entity holds 25%+ of shares or voting rights.
total (required): Total PSC entries returned for this company.
company_number (required): Companies House company number.
overseas_corporate_psc_flag (optional): Number of corporate PSCs registered outside the UK. Non-zero values indicate an offshore beneficial ownership chain.
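
A small sketch showing how the PSC output might be summarised, using the documented total, note, and overseas_corporate_psc_flag fields (call_tool is the same hypothetical helper as in the earlier examples).

```python
def summarise_psc(call_tool, company_number):
    result = call_tool("company_psc", {"company_number": company_number})

    if result["total"] == 0:
        # 'note' explains the zero-PSC case, typical for widely-held listed PLCs.
        return result.get("note") or "No registrable PSC entries returned."

    summary = f"{result['total']} PSC entries"
    overseas = result.get("overseas_corporate_psc_flag") or 0
    if overseas:
        summary += f"; {overseas} corporate PSC(s) registered outside the UK (offshore ownership signal)"
    return summary
```
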
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations by explaining what PSC data reveals (beneficial ownership thresholds and investigation flags). While annotations already declare readOnlyHint=true and other safety properties, the description provides domain-specific behavioral insights about what constitutes 'significant control' and investigation use cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear front-loading of the core purpose, followed by domain context that earns its place by explaining what PSC data represents and its investigative relevance. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, comprehensive annotations (readOnlyHint, openWorldHint, idempotentHint), 100% schema coverage, and the presence of an output schema, the description provides complete contextual understanding. It explains the domain significance of PSC data without needing to cover technical details already in structured fields.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without providing extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve') and resource ('Persons with Significant Control for a company'), distinguishing it from siblings like company_officers or company_profile. It provides domain-specific context about what PSC data represents, which helps differentiate its purpose from other company-related tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by explaining that PSC data reveals beneficial ownership and is a key flag in investigations, suggesting when this tool would be relevant. However, it doesn't explicitly state when to use this tool versus alternatives like company_officers or provide explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

disqualified_profile: Get Disqualified Director Profile (Grade: A)
Read-only · Idempotent

Fetch the full disqualification record for a director by officer ID.

Returns all disqualification orders: reason, Act/section cited, disqualification period, and associated company names. Use disqualified_search first to find the officer ID.

Parameters
officer_id (required): Companies House officer ID. Returned by disqualified_search.

Output Schema

name (optional): Officer name.
surname (optional): Family name, if split upstream.
forename (optional): Given name, if split upstream.
officer_id (required): Companies House officer ID looked up.
nationality (optional): Declared nationality.
officer_kind (required): Which CH endpoint returned the record: 'natural' (individual) or 'corporate' (legal entity).
date_of_birth (optional): Date of birth on record.
disqualifications (optional): All disqualification orders attached to this officer.
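
A minimal sketch of fetching and printing a disqualification record with the hypothetical call_tool helper. The per-order fields (reason, Act/section, period, associated companies) are described above but their exact keys are not documented here, so each order is printed whole.

```python
def disqualification_report(call_tool, officer_id):
    record = call_tool("disqualified_profile", {"officer_id": officer_id})

    kind = record["officer_kind"]  # 'natural' (individual) or 'corporate' (legal entity)
    print(f"{record.get('name')} ({kind}), nationality: {record.get('nationality')}")
    orders = record.get("disqualifications") or []
    print(f"{len(orders)} disqualification order(s) on record")
    for order in orders:
        print(order)  # exact per-order field names are not documented on this page
    return orders
```
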
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond this by explaining the dual endpoint strategy (natural person then corporate officer) and specifying the source of officer_id, which enhances behavioral understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by return details and usage notes in subsequent sentences. Each sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving detailed disqualification records), the description is complete with purpose, usage context, and behavioral notes. Annotations cover safety and idempotency, and an output schema exists, so the description appropriately focuses on adding value without needing to explain return values or repeat structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters (officer_id and response_format). The description adds minimal semantics by mentioning officer_id comes from disqualified_search results, but this is already hinted in the schema. No additional parameter details are provided, so it meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'full disqualification record for a disqualified director', specifying it returns all disqualification orders with details like reason, Act and section, period, associated companies, and undertaking details. It distinguishes from sibling tools like 'disqualified_search' by focusing on retrieving detailed records rather than searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by stating 'The officer_id comes from the disqualified_search results' and mentions trying 'the natural person endpoint first, then the corporate officer endpoint', which guides when to use this tool. However, it does not explicitly state when not to use it or name alternatives beyond the implied disqualified_search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fetch: Fetch Full Record from UK Due Diligence Register (Grade: A)
Read-only · Idempotent

Fetch the full record for an ID returned by search.

Routes by prefix to the appropriate register:

  • company:{number} → Companies House full profile

  • charity:{number} → Charity Commission full profile

  • disqualification:{officer_id} → Disqualified director full record

  • notice:{notice_id} → Gazette notice full legal text

Parameters
id (required): Prefixed record ID returned by search. Format: company:{number}, charity:{number}, disqualification:{officer_id}, or notice:{notice_id}

Output Schema

No output parameters.
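
Since fetch routes purely on the id prefix, a thin convenience wrapper can build the documented id formats. This is a sketch using the same hypothetical call_tool helper; the prefix strings come straight from the routing list above.

```python
VALID_PREFIXES = {"company", "charity", "disqualification", "notice"}

def fetch_record(call_tool, prefix, identifier):
    # Documented id formats: company:{number}, charity:{number},
    # disqualification:{officer_id}, notice:{notice_id}
    if prefix not in VALID_PREFIXES:
        raise ValueError(f"Unsupported prefix: {prefix!r}")
    return call_tool("fetch", {"id": f"{prefix}:{identifier}"})
```
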

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds transparency about routing behavior based on prefix, which is beyond what annotations provide. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise and front-loaded: one sentence for purpose, then bulleted list for routing. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description need not explain return values. It thoroughly covers routing logic for all prefix types, making it complete for its purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the parameter 'id' has a detailed description with format examples. The description complements by explaining how prefixes route to different registers, adding context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Fetch' and the resource 'full record for an ID returned by search'. It distinguishes from sibling tools by explaining routing by prefix to different registers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explains when to use (after search returns IDs) and provides routing logic for different prefixes. While it doesn't explicitly state when not to use, the context is clear. Alternatives are implicit in the routing but not named.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gazette_insolvency: Search Gazette Corporate Insolvency Notices (Grade: A)
Read-only · Idempotent

Search The Gazette's insolvency notice index by entity name.

Searches the Gazette's insolvency endpoint which covers corporate notice codes: winding-up orders (2443), administration orders (2448), liquidator appointments (2452), striking-off notices (2460), and more. Results are sorted by severity — winding-up orders and administration orders appear first.

Each result includes a notice_numeric_id. Read the full legal wording via the notice://{notice_numeric_id} resource.

The Gazette is the official UK public record. A notice here means the event has been formally published and is legally effective.

Parameters
end_date (optional): Filter notices up to this date (YYYY-MM-DD)
start_date (optional): Filter notices from this date (YYYY-MM-DD)
entity_name (required): Company or individual name to search for in Gazette insolvency notices
max_notices (optional): Cap on notices returned, applied after severity/date sort. Default 20. The Gazette insolvency feed returns up to 100 results per search — raise to 100 to see the full set.
notice_type (optional): Filter by notice code (e.g. '2441' winding-up petition, '2443' winding-up order, '2448' administration order, '2460' striking-off). Omit to search all.

Output Schema

notices (optional): Matching notices, sorted by severity (desc) then date (desc).
end_date (optional): Upper bound of the date range filter, if any.
start_date (optional): Lower bound of the date range filter, if any.
entity_name (required): Entity name that was searched.
total_notices (required): Total notices returned after deduplication, sorting, and cap.
max_notices_cap (required): The max_notices cap applied. Upstream may have more matching notices.
notice_type_filter (optional): Notice code filter applied, or null if all codes searched.
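
The insolvency search chains naturally into gazette_notice via notice_numeric_id. The sketch below assumes the hypothetical call_tool helper and assumes each notice record exposes notice_numeric_id as a top-level key, which this page states but does not show in a schema for individual notice entries.

```python
def insolvency_check(call_tool, entity_name, fetch_full_text=False):
    result = call_tool("gazette_insolvency", {
        "entity_name": entity_name,
        "max_notices": 100,  # raise from the default 20 to see the full feed, per the docs
    })
    print(f"{result['total_notices']} notice(s) for {result['entity_name']}")

    for notice in result.get("notices") or []:
        print(notice)  # already sorted by severity, then date
        if fetch_full_text:
            notice_id = notice["notice_numeric_id"]  # assumed key on each notice entry
            full = call_tool("gazette_notice", {"notice_id": notice_id})
            print(full)  # complete JSON-LD record: parties, legal basis, court, full text
    return result
```
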
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it explains that results are sorted by severity (winding-up orders first), notes The Gazette is the official UK public record, and clarifies that notices are legally effective. This enhances understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and concise, with four sentences that each add value: it states the purpose, specifies search scope, explains result sorting, and provides context about The Gazette. There is no wasted text, and information is front-loaded effectively.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search with filtering and sorting), rich annotations (read-only, open-world, etc.), 100% schema coverage, and the presence of an output schema, the description is complete enough. It covers purpose, scope, behavior, and legal context without needing to detail parameters or return values, which are handled elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics beyond the schema, such as mentioning searches by entity name and notice codes 2441-2460, but does not provide additional details on parameter usage or interactions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches The Gazette's linked-data API for corporate insolvency notices, specifying the notice codes (2441-2460) and types of notices (winding-up petitions, administration orders, etc.). It distinguishes itself from siblings by focusing on insolvency notices rather than company profiles, charity data, or other searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: searching for corporate insolvency notices in the official UK public record. It implies usage by mentioning that results are sorted by severity and that notices are legally effective. However, it does not explicitly state when not to use it or name alternatives among sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

gazette_notice: Get Gazette Notice Full Text (Grade: A)
Read-only · Idempotent

Fetch the full legal wording of a Gazette notice by numeric notice ID.

Returns the complete JSON-LD linked-data record for the notice: parties, legal basis, court, and full text. Use gazette_insolvency first to find notice_numeric_id values.

Parameters
notice_id (required): Numeric Gazette notice ID. Returned as notice_numeric_id by gazette_insolvency.

Output Schema

No output parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, etc. Description adds that it returns a 'complete JSON-LD linked-data record' with parties, legal basis, court, and full text, which is valuable beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, each earning its place: first for purpose, second for return content and usage guidance. Front-loaded and no extraneous text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given an output schema exists, the description need not explain return values. It covers the purpose, prerequisite, and parameter origin. Complete for a simple one-param tool with rich annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description restates that notice_id is numeric and mentions the sibling tool for finding it, which adds slight context but doesn't significantly enhance understanding beyond the schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Fetch the full legal wording of a Gazette notice by numeric notice ID.' It specifies the verb, resource, and input. It distinguishes from sibling gazette_insolvency by positioning it as a prerequisite tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit instruction: 'Use gazette_insolvency first to find notice_numeric_id values.' This tells the agent the correct workflow. Missing explicit when-not-to-use or alternatives, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_prompt (Grade: A)

Get a prompt by name with optional arguments.

Returns the rendered prompt as JSON with a messages array. Arguments should be provided as a dict mapping argument names to values.

Parameters
name (required): The name of the prompt to get
arguments (optional): Optional arguments for the prompt

Output Schema

result (required)
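
A sketch of the list-then-get flow for prompts, using the hypothetical call_tool helper. The shape of the result field returned by list_prompts is assumed to be a list of metadata objects with a name key; only the name and arguments inputs are documented above.

```python
def render_prompt(call_tool, name, arguments=None):
    listing = call_tool("list_prompts", {})
    prompts = listing.get("result") or []  # assumed: list of {name, description, arguments}
    known = [p.get("name") for p in prompts]
    if known and name not in known:
        raise ValueError(f"Unknown prompt {name!r}; available: {known}")

    # Arguments are passed as a dict mapping argument names to values.
    return call_tool("get_prompt", {"name": name, "arguments": arguments or {}})
```
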
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so description carries full burden. It does disclose the output format ('JSON with a messages array'), but omits behavioral details like side effects, rate limits, or error handling. Adequate for a simple read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose, no redundant information. Every sentence is meaningful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Has an output schema (though not detailed here), and description partially explains return format. For a tool with 2 params and no nested objects, the description is fairly complete, though missing notes on error scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both parameters described in schema). The description adds value by specifying that arguments should be a dict mapping names to values, which goes beyond the schema's generic 'Optional arguments for the prompt'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'get', the resource 'prompt by name', and mentions optional arguments. It distinguishes from sibling 'list_prompts' which likely returns all prompts, while this retrieves one by name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives (e.g., list_prompts). No mention of prerequisites, appropriate contexts, or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_prompts (Grade: A)

List all available prompts.

Returns JSON with prompt metadata including name, description, and optional arguments.

Parameters
No parameters.

Output Schema

result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full behavioral burden. It discloses that the tool returns JSON with name, description, and optional arguments, but omits details like read-only nature, side effects, or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the verb 'List', and every word adds value. No redundant or extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with no parameters and an existing output schema, the description covers the essential return fields (name, description, optional arguments). It is sufficiently complete for an agent to understand the tool's purpose and output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters, so baseline is 4. The description adds no parameter info, but none is needed; it does mention 'optional arguments' in the output, which subtly indicates the output structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource 'all available prompts', distinguishing it from sibling tools like 'get_prompt' which targets individual prompts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use the tool (listing all prompts) but does not explicitly exclude scenarios or mention alternatives like 'get_prompt' for individual prompt retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vat_validate: Validate UK VAT Number (HMRC) (Grade: A)
Read-only · Idempotent

Validate a UK VAT number against the HMRC register.

Returns the trading name and address as registered with HMRC for VAT purposes. The VAT-registered trading address often differs from the Companies House registered address — that discrepancy is a due diligence signal worth noting.

Parameters
vat_number (required): UK VAT registration number. Accepts: 'GB123456789', '123456789', 'GB 123 456 789'. GB prefix and spaces normalised automatically.

Output Schema

valid (required): True if HMRC confirmed the VAT number is currently registered. False means HMRC returned 404 (not registered / deregistered).
vat_number (required): Canonical VAT number in 'GB<9 digits>' format.
trading_name (optional): Trading name registered with HMRC for VAT. Compare with the Companies House name — discrepancies are a due diligence signal.
registered_address (optional): VAT-registered trading address. May differ from the Companies House registered office address.
consultation_number (optional): HMRC consultation reference number for this lookup.
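
A final sketch, again with the hypothetical call_tool helper, showing the accepted input formats and the documented name-discrepancy signal. The comparison against a Companies House name is a simple illustration, not part of the tool itself.

```python
def vat_check(call_tool, vat_number, companies_house_name=None):
    # Any of 'GB123456789', '123456789', 'GB 123 456 789' is accepted;
    # the tool normalises the GB prefix and spaces itself.
    result = call_tool("vat_validate", {"vat_number": vat_number})

    if not result["valid"]:
        print(f"{result['vat_number']} is not currently VAT-registered (HMRC returned 404)")
        return result

    if companies_house_name and result.get("trading_name"):
        if result["trading_name"].strip().lower() != companies_house_name.strip().lower():
            # Documented due diligence signal: VAT trading name differs from the CH name.
            print("Name discrepancy:", result["trading_name"], "vs", companies_house_name)
    return result
```
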
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it specifies that the tool returns the trading name and address, notes that this address may differ from Companies House (a due diligence signal), and implies it performs normalization on VAT number formats. This enriches understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by additional context in two concise sentences. Every sentence adds value: the first states the action, the second specifies the return data, and the third provides important due diligence insight. There is no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations (covering safety and idempotency), 100% schema description coverage, and the presence of an output schema (which handles return values), the description is complete enough. It adds context about address discrepancies and normalization behavior that complements the structured data, making it fully adequate for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with detailed descriptions for both parameters (vat_number and response_format). The description does not add any meaningful semantic information beyond what the schema provides, such as explaining parameter interactions or usage nuances. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Validate a UK VAT number') and resource ('against the HMRC register'), distinguishing it from all sibling tools which focus on charities, companies, disqualified persons, insolvency, or land titles rather than VAT validation. It precisely identifies its unique domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to validate UK VAT numbers and retrieve registered details), but it does not explicitly mention when not to use it or name alternatives for similar validation tasks (e.g., if other tools exist for non-UK VAT). The context is well-defined but lacks explicit exclusions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
