Glama
Ownership verified

Server Details

Unmodified live data from 27 national registries. UBO chain walker + 10 MCP prompts.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
sophymarine/openregistry
GitHub Stars
4
Server Listing
openregistry

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.6/5 across 10 of 10 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: document retrieval, company profile, document metadata, document navigation, officers, shareholders, filings listing, jurisdiction reference, company search, and officer search. There is no overlap or ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern in snake_case (e.g., fetch_document, get_company_profile, search_companies). No mixed conventions or deviations.

Tool Count: 5/5

With 10 tools, the server covers a comprehensive set of operations for company registry access without being excessive. Each tool serves a necessary function, and the count is appropriate for the domain.

Completeness: 4/5

The tool surface is nearly complete for read-only registry operations, covering company search, profile, filings, documents (with navigation), officers, and shareholders. However, 'get_charges' is mentioned in the list_filings description but absent as a standalone tool, which indicates a minor gap.

Available Tools

10 tools
fetch_document (Fetch document): A
Read-only, Idempotent

Read a filing's content by document_id (from list_filings). Filing metadata alone doesn't answer most questions — the numbers and text live inside the document.

RESPONSE SHAPES:
• kind='embedded' (under max_bytes ≈ 20 MB) — returns full bytes_base64, source_url_official (evergreen registry URL), and source_url_direct (short-TTL signed proxy URL). PDFs render as a document block you can read natively.
• kind='resource_link' (oversized) — NO bytes_base64. Returns reason, next_steps, both source URLs, and index_preview {page_count, text_layer, outline_present}. Use get_document_navigation to locate pages, then re-call this tool with pages='N-M' and format='pdf'|'text'|'png' for the content.

CRITICAL: if this tool fails (rate limit, 5xx, timeout), do NOT fill in names / numbers / dates from memory — tell the user what failed and offer retry or source_url_official. Outline titles, previews, and snippets from navigation tools are for LOCATING pages, never for quoting.

source_url_official is auto-resolved from the most recent list_filings call; the optional company_id / transaction_id / filing_type / filing_description inputs are overrides for the rare case where document_id didn't come through list_filings.
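The two response shapes above imply a simple client-side branch. A minimal sketch, assuming the field names from the description (kind, bytes_base64, index_preview); the payloads are invented samples, not actual server output:

```python
import base64

def next_step(resp: dict) -> str:
    """Decide what to do with a hypothetical fetch_document response."""
    if resp.get("kind") == "embedded":
        # Full content is inline; decode and read it directly.
        content = base64.b64decode(resp["bytes_base64"])
        return f"read_inline:{len(content)}_bytes"
    if resp.get("kind") == "resource_link":
        # Oversized: no bytes. Navigate first, then re-fetch a page range.
        preview = resp.get("index_preview", {})
        return f"navigate_then_refetch:{preview.get('page_count', '?')}_pages"
    return "unexpected_shape"

embedded = {"kind": "embedded",
            "bytes_base64": base64.b64encode(b"PDF...").decode()}
oversized = {"kind": "resource_link", "index_preview": {"page_count": 412}}
```

The key point the sketch encodes: a resource_link response never carries bytes, so the only correct follow-up is get_document_navigation plus a page-ranged re-fetch, never retrying with a larger max_bytes.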

Parameters (JSON Schema)
- fresh (optional): Bypass R2 cache. Filings are immutable; rarely needed.
- format (optional): Preferred content type: application/xhtml+xml, application/pdf, application/xml, application/json. Omit to let the adapter pick the most structured option (XHTML > XML > JSON > PDF).
- max_bytes (optional): Inline-size cutoff. Default ~20 MB. Documents above this return as `kind='resource_link'` — call `get_document_navigation` for them.
- company_id (optional): Override; auto-resolved from list_filings side-cache.
- document_id (required): Document ID from list_filings; do not synthesize (composite IDs will 404).
- filing_type (optional): Override; auto-resolved from list_filings side-cache.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
- transaction_id (optional): Override; auto-resolved from list_filings side-cache.
- filing_description (optional): Override; auto-resolved from list_filings side-cache.

Output Schema (JSON Schema)
- pages (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- size_bytes (optional)
- source_url (optional)
- document_id (optional)
- bytes_base64 (optional)
- jurisdiction (optional)
- chosen_format (optional)
- available_formats (optional)
- bytes_omitted_reason (optional)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds substantial behavioral context beyond annotations: it explains the two response shapes (embedded vs resource_link), details workflow steps for oversized documents, provides critical rules about citation practices and error handling, describes caching behavior and auto-resolution mechanisms, and explains jurisdiction-specific considerations. While annotations cover basic safety (readOnlyHint, idempotentHint), the description adds rich operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Response Shapes, Workflow, Critical Rules) and uses bullet points effectively. While comprehensive, it maintains focus on essential information: every sentence serves a clear purpose in guiding tool usage. The front-loaded statement about being the 'Primary tool for reading a filing's content' immediately establishes purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, no output schema), the description provides exceptional completeness: it covers response formats, error handling, workflow integration with sibling tools, jurisdiction considerations, caching behavior, and practical usage constraints. The description fully compensates for the lack of output schema by detailing what the tool returns and how to interpret different response types.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 89% schema description coverage, the baseline would be 3, but the description adds meaningful context about parameter usage: it explains the primary use of document_id (from list_filings/get_financials), clarifies that override parameters are for 'rare use' cases, provides practical guidance on format selection ('recommended — XHTML > XML > JSON > PDF'), and explains the practical implications of max_bytes settings. This adds significant value beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states this is the 'Primary tool for reading a filing's content' and distinguishes it from sibling tools by explaining that filing metadata alone is insufficient - the actual content requires this tool. It clearly identifies the verb (reading/fetching) and resource (filing documents).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides comprehensive usage guidance: it specifies when to use this tool ('MANDATORY for any substantive answer'), explains the workflow for different response types, distinguishes when to use sibling tools like 'fetch_document_pages' and 'get_document_navigation', and explicitly states when NOT to use certain approaches ('Don't reflexively retry with a larger max_bytes').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company_profile (Company profile): A
Read-only, Idempotent

Fetch the structured profile of a company by its registry-specific ID. Returns unified top-level fields (company_id, company_name, status, status_detail, incorporation_date, registered_address) plus raw upstream fields under jurisdiction_data. status is a coarse active/inactive/dissolved/unknown enum; status_detail keeps the registry's native string. registered_address is a flat string; the upstream nested form (when present) stays in jurisdiction_data.

Does not bundle officers / shareholders / filings / charges — call those tools separately. ID format varies per registry; pull company_id from search_companies rather than guessing. For per-country ID format and the full jurisdiction_data field catalogue call list_jurisdictions({jurisdiction:'<CC>'}).
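A minimal sketch of consuming the unified shape described above. The field names (status, status_detail, jurisdiction_data) come from the description; the sample payload is invented for illustration:

```python
# The coarse four-value enum stated in the description.
UNIFIED_STATUSES = {"active", "inactive", "dissolved", "unknown"}

def summarize_profile(profile: dict) -> str:
    """One-line summary, preferring the registry's native status string."""
    status = profile.get("status", "unknown")
    assert status in UNIFIED_STATUSES, "status is a coarse four-value enum"
    # status_detail keeps the registry's native wording; fall back to the enum.
    detail = profile.get("status_detail") or status
    return f"{profile['company_name']} ({profile['jurisdiction']}): {detail}"

sample = {
    "company_id": "00445790",
    "company_name": "EXAMPLE LTD",
    "jurisdiction": "GB",
    "status": "active",
    "status_detail": "active - proposal to strike off",
    "jurisdiction_data": {"type": "ltd"},  # raw upstream fields, names unchanged
}
```

Compare on `status` when working across jurisdictions, and only surface `status_detail` to users, since its vocabulary differs per registry.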

Parameters (JSON Schema)
- fresh (optional): Bypass cache; call upstream directly.
- include (optional): Optional per-country extra fetches; ignored where unsupported.
- company_id (required): Registry-specific identifier. Examples: GB '00445790' (8-digit Companies House number, or 'SC123456' for Scotland / 'NI...' / 'OC...' / 'LP...'); NO '923609016' (9-digit); AU 11-digit ABN or 9-digit ACN; FR 9-digit SIREN or 14-digit SIRET; PL 10-digit KRS; CZ 8-digit IČO; FI Y-tunnus '0112038-9'. Call list_jurisdictions for the full per-country format.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)
- status (optional): Four-value unified status safe for cross-jurisdiction comparison.
- company_id (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- company_name (optional)
- jurisdiction (optional)
- status_detail (optional)
- jurisdiction_data (optional): Full original response fields from the upstream registry, field names unchanged. Shape is jurisdiction-specific; see `list_jurisdictions({ jurisdiction: '<CODE>' })`.
- incorporation_date (optional)
- registered_address (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations. While annotations declare read-only, non-destructive, and idempotent operations, the description details the return structure (unified fields plus jurisdiction_data), explains status field semantics, describes caching behavior ('fresh: true bypasses the cache'), and mentions performance implications of optional flags ('slower', 'doubles upstream calls'). No contradictions with annotations are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and efficiently packed with information. It front-loads the core purpose, then details return values, exclusions, input guidance, optional flags, and per-country caveats. While comprehensive, some sentences are lengthy, and the density of information might slightly reduce immediate clarity, though every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, rich annotations), the description is largely complete. It covers purpose, usage, return structure, exclusions, input guidance, and jurisdictional nuances. However, without an output schema, it could more explicitly detail the full return format or error conditions, though the annotations provide safety context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds meaningful context beyond the schema: it explains the variability of company_id formats, provides guidance on obtaining company_id from 'search_companies', clarifies that optional flags are jurisdiction-specific and ignored elsewhere, and explains the purpose of the fresh parameter. However, it doesn't fully detail all parameter interactions or edge cases.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch the full profile of a company by its registry-specific ID.' It specifies the verb ('fetch'), resource ('company profile'), and key identifier ('registry-specific ID'), and distinguishes it from siblings by explicitly listing what it does NOT include (filings, officers, etc.) and naming alternative tools for those purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states what the tool does NOT include and names specific sibling tools for those purposes (e.g., 'list_filings', 'get_officers'). It also advises pulling company_id from 'search_companies' rather than guessing, and directs users to 'list_jurisdictions' for per-country details.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_document_metadata (Document metadata): A
Read-only, Idempotent

Retrieve metadata for a filing document by document_id (from list_filings). Returns available content formats with byte sizes, page count, source URL, creation date. Raw upstream fields preserved under jurisdiction_data. Call this before fetch_document when a document may be large or its format is unknown.

Do NOT construct or guess document_id — some registries use composite IDs that must come from list_filings. Synthesized IDs will 404. Empty available_formats means the body is paywalled or unavailable upstream. Unsupported jurisdictions return 501.
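The metadata-before-fetch workflow can be sketched as a pre-flight check: pick the smallest available format and predict whether fetch_document would inline it under its default ~20 MB cutoff. Field names follow the output schema above; the sample sizes are invented:

```python
DEFAULT_MAX_BYTES = 20 * 1024 * 1024  # fetch_document's documented default cutoff

def plan_fetch(meta: dict, max_bytes: int = DEFAULT_MAX_BYTES):
    """Pick the smallest format and predict inline vs resource_link."""
    sizes = meta.get("size_bytes_by_format") or {}
    if not sizes:
        # Empty available_formats: the body is paywalled or unavailable upstream.
        return None
    fmt, size = min(sizes.items(), key=lambda kv: kv[1])
    return {"format": fmt, "will_inline": size <= max_bytes}

meta = {"size_bytes_by_format": {"application/pdf": 48_000_000,
                                 "application/xhtml+xml": 310_000}}
```

Here the 48 MB PDF would come back as a resource_link, but the 310 KB XHTML rendition fits inline, so requesting that format avoids the navigation round-trip entirely.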

Parameters (JSON Schema)
- fresh (optional): Bypass cache. Filings are immutable; rarely needed.
- document_id (required): Document ID from a previous list_filings call; do not synthesize.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)
- pages (optional)
- created_at (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- source_url (optional)
- document_id (optional)
- jurisdiction (optional)
- available_formats (optional)
- size_bytes_by_format (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it explains error conditions (404/502 for synthesized IDs, 501 for paywalled/unsupported jurisdictions), availability caveats (available_formats may be empty), and the purpose of checking metadata before downloading large documents. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear paragraphs: first states purpose and returns, second provides usage guidance, third gives critical warnings, and fourth covers edge cases. Every sentence adds value without redundancy, and key points are front-loaded (e.g., the warning about not constructing IDs is emphasized).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (metadata retrieval with jurisdiction-specific behaviors), the description is highly complete. It covers purpose, usage, prerequisites, error cases, and relationships to other tools. While there's no output schema, the description details return values (formats, sizes, page count, etc.). The annotations provide safety context, and the description fills in behavioral nuances adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67% (2 of 3 parameters have descriptions). The description adds meaningful context for document_id beyond the schema's 'Document ID from a previous list_filings call' by warning against constructing IDs and explaining composite ID formats. It also clarifies jurisdiction usage by referencing list_jurisdictions for details. However, it doesn't explicitly address the 'fresh' parameter's semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('retrieve metadata') and resource ('filing document'), distinguishing it from sibling tools like fetch_document (which downloads content) and list_filings (which lists documents). It explicitly mentions what metadata is returned (content formats, byte sizes, page count, source URL, creation date, jurisdiction_data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('call this before fetch_document when the document might be large or you don't yet know the format') and when not to use it ('do NOT construct or guess document_id values'). It names specific alternatives (list_filings for obtaining IDs, fetch_document for downloading, list_jurisdictions for jurisdiction details).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_document_navigation (Document navigation): A
Read-only, Idempotent

Return a navigation index for a cached document: PDF outline / bookmarks, per-page text previews (~200 chars each), keyword-matched landmarks (balance sheet / directors report / auditor report), text-layer classification, and source URLs.

Call this FIRST for PDFs too large to fit in a single document block (fetch_document returned kind='resource_link'). Use the outline / previews / landmarks to pick a page range, then re-call fetch_document with pages='N-M' for the authoritative content.

Navigation aids only: page previews, outline titles, landmark matches, and snippets may be truncated or contain OCR errors. NEVER cite them as source material for figures, quotes, dates, or names — always quote from a subsequent fetch_document page-range fetch. Requires the document bytes to already be cached via fetch_document.
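The locate-then-fetch loop can be sketched as: find a landmark in the navigation index, widen it to a small page window, and build the pages='N-M' argument for the follow-up fetch_document call. The index structure here is an invented stand-in for the real output:

```python
def pages_arg(landmarks, label, span=2, last_page=9999):
    """Turn a matched landmark into a pages='N-M' range for fetch_document."""
    for lm in landmarks:
        if lm["label"] == label:
            start = lm["page"]
            end = min(start + span, last_page)  # a few pages usually suffice
            return f"{start}-{end}"
    return None  # not found: fall back to the outline or page previews

index = [{"label": "directors report", "page": 3},
         {"label": "balance sheet", "page": 12}]
```

Per the warning above, the landmark label and preview text only steer the range selection; any figure or quote must come from the page-range fetch_document response, never from the index itself.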

Parameters (JSON Schema)
- fresh (optional): Re-run pdfjs against the cached bytes (does not re-pull from upstream).
- company_id (optional): Override; auto-resolved from list_filings side-cache.
- document_id (required): Document ID from list_filings; document must already be cached via fetch_document.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
- transaction_id (optional): Override; auto-resolved from list_filings side-cache.

Output Schema (JSON Schema)
- pages (optional)
- outline (optional)
- headings (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- document_id (optional)
- jurisdiction (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context: navigation aids may be truncated, contain OCR errors, or have false positives, and it explains the caching requirement and the 'fresh' parameter's effect. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections: purpose, returns, critical warnings, and prerequisites. It's front-loaded with key information, though it could be slightly more concise by integrating some details (e.g., jurisdiction support) more tightly. Every sentence adds value, but minor redundancy exists.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (navigation for large documents) and lack of output schema, the description is mostly complete: it explains purpose, usage, behavioral caveats, and prerequisites. However, it doesn't detail the exact structure of returned navigation data (e.g., format of outlines or landmarks), which could aid agent interpretation. Annotations cover safety aspects well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 80%, so the schema documents most parameters well. The description adds context for 'jurisdiction' by mentioning it's for official government sources and referencing 'list_jurisdictions' for details, but doesn't explain other parameters like 'document_id' or 'fresh' beyond what the schema provides. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Open the navigation index for a cached document' and specifies it returns 'outline (PDF bookmarks), per-page text previews, keyword-matched landmarks, text_layer classification, and source URLs.' It distinguishes from siblings like 'fetch_document_pages' by emphasizing this is for navigation aids only, not authoritative content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: call this FIRST for large PDFs or when locating sections, NEVER cite navigation aids as source material, and always follow up with 'fetch_document_pages' for authoritative content. It also specifies prerequisites: requires cached document bytes and to call 'fetch_document' first for new documents, clearly differentiating from alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_officers (Officers): A
Read-only, Idempotent

Return a company's officers — current directors, secretaries, members, partners, board members, procurists, liquidators, plus historical resignations by default. Each officer has a unified shape (officer_id, name, role, appointed_on, resigned_on, is_active) plus raw upstream fields in jurisdiction_data. Role labels pass through in the registry's native language (e.g. Styremedlem, Předseda představenstva, Président); translate client-side. Birth-date precision varies by registry.

Officer-ID stability varies: corporate officers usually carry the corporate's own company_id; natural persons may carry a synthetic index. Some registries mask names under GDPR — that masking is upstream. Jurisdictions without an officer feed return 501.
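Since historical resignations are included by default, a client that only wants the current board can filter on the unified fields. A small sketch mirroring include_resigned=false client-side; the roster payload is an invented sample of the unified shape:

```python
def serving(officers):
    """Names of currently-serving officers from the unified shape."""
    return [o["name"] for o in officers
            if o.get("is_active") and o.get("resigned_on") is None]

roster = [
    {"officer_id": "p1", "name": "A. Brown", "role": "director",
     "appointed_on": "2019-01-04", "resigned_on": None, "is_active": True},
    # Role labels pass through in the registry's native language (here Norwegian).
    {"officer_id": "p2", "name": "C. Dahl", "role": "Styremedlem",
     "appointed_on": "2015-06-01", "resigned_on": "2021-03-30", "is_active": False},
]
```

Passing include_resigned=false to the tool itself achieves the same result with a smaller response; the client-side filter matters when you already hold the full roster.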

Parameters (JSON Schema)
- fresh (optional): Bypass cache; refetch from upstream.
- company_id (required): Registry company ID, from search_companies.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
- group_by_person (optional): CZ only. Dedupe the same person across consecutive appointments (board member → chair → vice-chair) into one entry; appointments list under `jurisdiction_data._appointments[]`. Default false.
- include_resigned (optional): Include resigned officers. Default true; set false for currently-serving only.

Output Schema (JSON Schema)
- data (optional): Adapter returns a bare array; textResult() wraps under `data`.
- items (optional)
- officers (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- next_cursor (optional)
- total_count (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it explains GDPR masking, jurisdiction-specific limitations (e.g., birth-date precision, 501 gating), cache bypass with 'fresh: true', and how flags are ignored on unsupported registries. While annotations cover read-only, open-world, and idempotent hints, the description enriches this with practical constraints and data source details without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with core functionality, followed by details on data shape, flags, and caveats. While comprehensive, it remains focused with minimal redundancy, though some sentences could be slightly tightened (e.g., the per-country caveats paragraph is dense but informative).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no output schema, rich annotations), the description is highly complete: it covers purpose, usage, data format, parameter semantics, jurisdiction-specific behaviors, and links to other tools for further details. It addresses gaps from missing output schema by describing the unified shape of returned officers and potential errors like 501.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With schema description coverage at 60%, the description compensates by explaining parameter implications in detail: it clarifies the default and effect of 'include_resigned', specifies that 'group_by_person' is for CZ only, and notes that 'fresh' bypasses cache. It also adds context for 'jurisdiction' and 'company_id' by linking to other tools for support details, enhancing understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return the officers of a company' with specific details about what constitutes an officer (directors, secretaries, members, etc.) and distinguishes it from sibling tools like 'get_officer_appointments' by explaining the relationship between them. It goes beyond a simple list to explain the unified shape of returned data and cross-company tracing capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: it mentions using 'get_officer_appointments' for cross-company tracing with officer_id, and directs users to 'list_jurisdictions' for per-country caveats and support details. It also explains when certain flags are applicable (e.g., 'group_by_person' for CZ only) and when jurisdictions return 501 errors.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_shareholders (Shareholders): A
Read-only · Idempotent

Return the shareholders / members / quota-holders — the legal-statutory equity roster published by the registry, no ownership-threshold filter. Use this for any shareholder / member / quota-holder question.

Shareholders are a DIFFERENT concept from beneficial owners (PSC / UBO), who appear on a separate register only when above a statutory control threshold (typically >25%). The two can disagree (a 10% shareholder is on the members register but not the PSC register; a corporate trustee can be a PSC without appearing on the members register).

Disclosure is legal-form-conditional: private-limited / LLC forms typically expose quota-holders in the public register; joint-stock / public-limited forms keep shareholders in a private book, so this tool may return an empty list, a pointer to the relevant filing (use fetch_document on the returned document_id), or a statutory explanation. Every response includes a disclosure flag and/or note. Raw upstream fields preserved in jurisdiction_data. Unsupported jurisdictions return 501.
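The three response shapes described here (a populated roster, a pointer to a filing, or a statutory note) can be dispatched client-side. A minimal sketch, assuming the field names `items`, `document_id`, and `disclosure` from the description; the helper itself is hypothetical, not part of the server:

```python
# Dispatcher sketch for the three response shapes described above. Field names
# (`items`, `document_id`, `disclosure`) follow the description; the helper
# itself is hypothetical, not part of the server.

def interpret_shareholders(response: dict) -> str:
    """Decide the follow-up action for a get_shareholders result."""
    items = response.get("items") or []
    if items:
        return f"roster:{len(items)}"            # register exposes the holders directly
    if response.get("document_id"):
        return "fetch_document"                   # roster lives in a filing; round-trip the ID
    return response.get("disclosure", "none")     # statutory explanation / disclosure flag

print(interpret_shareholders({"items": [{"name": "A"}, {"name": "B"}]}))  # roster:2
print(interpret_shareholders({"items": [], "document_id": "doc-123"}))    # fetch_document
```

The ordering matters: a non-empty roster wins over a document pointer, and the disclosure note is the fallback when the register publishes nothing.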

Parameters (JSON Schema)

- fresh (optional): Bypass cache; refetch from upstream.
- company_id (required): Registry company ID, from search_companies.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)

- data (optional)
- as_of (optional)
- items (optional)
- company_id (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- total_count (optional)
- jurisdiction (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, open-world, idempotent, and non-destructive hints. The description adds valuable context beyond annotations: it explains that responses may vary by jurisdiction (e.g., empty lists for joint-stock companies, structured arrays or document pointers), includes disclosure flags, mentions caching behavior with `fresh: true`, and notes that unsupported jurisdictions return 501. It also references per-country caveats and how to access them via other tools, though it doesn't detail rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., 'When to call this tool,' public disclosure details, per-country caveats) and uses bold for emphasis. It is appropriately sized for the tool's complexity, though some sentences are lengthy. Every sentence adds value, such as explaining jurisdictional variations and tool interactions, with minimal redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (legal-registry data with jurisdictional variations), no output schema, and rich annotations, the description is highly complete. It covers purpose, usage guidelines, behavioral nuances (e.g., response shapes, caching, error codes), parameter context, and references to other tools for further details. It adequately compensates for the lack of output schema by describing possible response types and flags.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (33%), with only the 'jurisdiction' parameter described in the schema. The description compensates by explaining the purpose of 'fresh' (bypasses cache) and implying 'company_id' identifies the target company. It also adds context about jurisdiction codes (ISO 3166-1 alpha-2) and references `list_jurisdictions` for details, though it doesn't fully document all parameter formats or constraints beyond what's implied.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 'the shareholders / members / quota-holders of a company' from the 'legal-statutory equity roster published by the company registry,' with explicit scope ('no ownership-threshold filter'). It distinguishes from sibling `get_persons_with_significant_control` by explaining the different registers and concepts, making the purpose specific and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (e.g., for 'shareholders', 'members', 'quota-holders' in various languages) and when not to use it (e.g., not for PSC/beneficial owners unless explicitly asked). It names the alternative tool (`get_persons_with_significant_control`) and explains the conceptual differences, including examples of disagreement between registers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_filings (Filing history): A
Read-only · Idempotent

Return a company's filing history, newest first. Each filing has filing_id, filing_date, category, description, and (when upstream exposes one) a document_id that round-trips to get_document_metadata / fetch_document. Raw upstream fields preserved under jurisdiction_data.

Filter via the optional category. Common normalized values: 'accounts', 'annual-return', 'capital', 'charges', 'confirmation-statement', 'incorporation', 'insolvency', 'liquidation', 'mortgage', 'officers', 'resolution'. Native upstream form codes also accepted.

This tool returns metadata only — call fetch_document on document_id for the actual filing bytes. has_document=false means the body is paywalled or unavailable upstream. Pagination uses limit (default 25, max 1000) plus cursor (GB) or offset (IE). Unsupported jurisdictions return 501; call list_jurisdictions for per-country category values and pagination style.
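The cursor side of the pagination contract (GB) can be sketched as a loop that feeds each page's `next_cursor` back in until it is absent; `fetch_page` below is a stub standing in for the real call, with canned pages:

```python
from __future__ import annotations

# Cursor pagination sketch (GB style) for the contract described above: feed
# each page's `next_cursor` back as `cursor`, stop when it is absent.
# `fetch_page` is a stub standing in for the real list_filings call.

PAGES = {
    None: {"items": ["f1", "f2"], "next_cursor": "c2"},
    "c2": {"items": ["f3"]},                       # last page: no next_cursor
}

def fetch_page(company_id: str, cursor: str | None) -> dict:
    """Stand-in for list_filings({jurisdiction: 'GB', company_id, cursor})."""
    return PAGES[cursor]

def all_filings(company_id: str) -> list:
    items, cursor = [], None
    while True:
        page = fetch_page(company_id, cursor)
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if cursor is None:
            return items

print(all_filings("00000006"))                     # ['f1', 'f2', 'f3']
```

For IE the same loop would advance an `offset` by the page size instead of threading an opaque cursor.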

Parameters (JSON Schema)

- fresh (optional): Bypass cache; refetch from upstream.
- limit (optional): Items per page. Default 25.
- cursor (optional): Opaque pagination cursor returned as `next_cursor` (GB). Omit for first page.
- offset (optional): Pagination offset (IE).
- category (optional): Category filter. Use a normalized value or the registry's native form code. Call `list_jurisdictions({jurisdiction:'<CC>'})` for the accepted values per country.
- company_id (required): Registry-specific company ID. IE accepts an optional '/B' suffix for the business-name register.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)

- items (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- next_cursor (optional)
- total_count (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds substantial behavioral context beyond what annotations provide. While annotations declare readOnlyHint=true, idempotentHint=true, etc., the description details pagination behavior (limit default 25, max 1000, cursor vs offset pagination), jurisdiction limitations (unsupported jurisdictions return 501), document availability constraints (has_document flag, paywalled bodies), and sorting order (newest-first). This provides crucial operational context that annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized for a complex tool with 7 parameters. It's front-loaded with core functionality, then addresses filtering, pagination, and jurisdiction caveats. While comprehensive, every sentence earns its place by providing necessary operational context. Minor deduction for some density, but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, jurisdiction variations, pagination differences) and lack of output schema, the description provides excellent completeness. It covers return format, filtering options, pagination mechanisms, jurisdiction limitations, error conditions, and relationships to sibling tools. The guidance to call list_jurisdictions for per-country details appropriately delegates complexity while maintaining completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 86% schema description coverage, the baseline would be 3, but the description adds significant value beyond the schema. It explains the relationship between category filtering and jurisdiction-specific codes, clarifies pagination behavior (cursor vs offset by jurisdiction), and provides context about company_id suffixes and jurisdiction support. The description compensates for the 14% schema coverage gap with practical usage guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return a company's filing history' with specific details about what each filing contains (filing_id, filing_date, category, description, document_id, jurisdiction_data) and that results are newest-first. It distinguishes from siblings like get_document_metadata and fetch_document by explaining the relationship, and from other list/search tools by focusing specifically on filings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it mentions using the optional 'category' parameter to filter, explains when to use get_document_metadata/fetch_document for document retrieval, and explicitly states that unsupported jurisdictions return 501 with the alternative to call list_jurisdictions for per-country caveats. It also distinguishes from sibling tools by explaining the document_id relationship.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_jurisdictions (Country and tool matrix): A
Read-only · Idempotent

Per-country reference. Pass EXACTLY ONE of:
- jurisdiction='GB': registry name + URL, data license, company-ID format with examples, native-to-unified status enum mapping, and the list of tools supported.
- supports_tool='get_officers': which jurisdictions implement a given tool.
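The EXACTLY-ONE contract can be enforced client-side before the call (the server answers a missing-parameter call with a structured 400, per the evaluation further down). A sketch; the guard function is illustrative:

```python
# Client-side guard for the EXACTLY-ONE contract above (the server answers a
# missing-parameter call with a structured 400). The function is illustrative.

def validate_args(args: dict) -> dict:
    """Require exactly one of 'jurisdiction' / 'supports_tool'."""
    given = [k for k in ("jurisdiction", "supports_tool") if args.get(k)]
    if len(given) != 1:
        raise ValueError("pass exactly one of jurisdiction= or supports_tool=")
    if given == ["jurisdiction"]:
        # Codes are case-insensitive upstream; normalise 'ca-bc' -> 'CA-BC'.
        args["jurisdiction"] = args["jurisdiction"].upper()
    return args

print(validate_args({"jurisdiction": "gb"}))             # {'jurisdiction': 'GB'}
print(validate_args({"supports_tool": "get_officers"}))  # {'supports_tool': 'get_officers'}
```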

Parameters (JSON Schema)

- jurisdiction (optional): ISO 3166-1 alpha-2 country code (case-insensitive; CA subdivisions hyphenated like 'CA-BC'). Returns the full per-country schema. Mutually exclusive with `supports_tool`.
- supports_tool (optional): Tool name (e.g. 'get_officers', 'get_shareholders'). Returns the matrix of which jurisdictions implement this tool. Mutually exclusive with `jurisdiction`.

Output Schema (JSON Schema)

- hint (optional)
- tool (optional): Populated in cross-country support-matrix mode: echoes the tool name that was queried.
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- jurisdiction (optional): Populated in single-country mode: carries the JurisdictionMetadata for the requested country.
- supported_in (optional)
- supported_count (optional)
- not_supported_in (optional)
- not_supported_count (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: the two distinct response shapes, the 400 error for no parameters, and the case-insensitive handling of jurisdiction codes. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient and well-structured: first sentence establishes purpose, bullet points clearly explain the two modes, and final sentences cover error cases and sibling differentiation. Every sentence earns its place with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (two distinct modes with different return shapes), the description provides excellent context about what information is returned in each mode. However, without an output schema, the description doesn't fully document the return structure details. The annotations provide good safety coverage, making this mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds some semantic context about what each parameter triggers (full schema vs. cross-country matrix) and provides example values, but doesn't add syntax or format details beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'Per-country reference data dictionary' with two specific modes: full schema for one country or cross-country matrix for one tool. It distinguishes from sibling 'about' by specifying this tool is for jurisdiction-specific reference data while 'about' is for server-level info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: 'pass EXACTLY ONE of' the two parameters, with clear examples of each mode. It explicitly states when NOT to use this tool ('For server-level info... call `about` instead') and provides the consequence of incorrect usage ('Calling with no parameters returns a structured 400').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies (Search companies): A
Read-only · Idempotent

Search a national company registry by name or keyword. Pass EXACTLY ONE of:
- jurisdiction='GB': single country, direct.
- jurisdictions=['GB','NO','FR']: multi-country when you're unsure; the server asks the user to confirm (clients with MCP elicitation) or errors back asking you to ask in chat.

Per-tier cap on distinct countries per call: anonymous=3, pro=10, max=30, enterprise=unlimited.

Returns candidates with unified fields (company_id, company_name, status, incorporation_date, registered_address) plus raw upstream jurisdiction_data. For country-specific filters (FR ca_min, CZ czNace, CH canton, etc.) pass the filters object — call list_jurisdictions for the per-country schema.
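The two calling modes and the `filters` object can be sketched as an argument builder. The FR filter key `ca_min` comes from the parameter docs; the builder itself and all values are illustrative:

```python
# Sketch of assembling search_companies arguments for the two modes described
# above. The FR filter key `ca_min` comes from the parameter docs; the builder
# itself is hypothetical.

def build_search(query, *, jurisdiction=None, jurisdictions=None,
                 filters=None, limit=10):
    """Assemble arguments; the two jurisdiction modes are mutually exclusive."""
    if (jurisdiction is None) == (jurisdictions is None):
        raise ValueError("pass exactly one of jurisdiction / jurisdictions")
    args = {"query": query, "limit": limit}
    if jurisdiction:
        args["jurisdiction"] = jurisdiction       # single country, direct
    else:
        args["jurisdictions"] = jurisdictions     # fan-out; may trigger confirmation
    if filters:
        args["filters"] = filters                 # flat, keyed by upstream field name
    return args

# Single-country with a country-specific revenue floor:
fr = build_search("boulangerie", jurisdiction="FR", filters={"ca_min": 100000})
# Multi-country fan-out when the country is uncertain (tier caps apply):
multi = build_search("Acme", jurisdictions=["GB", "NO", "FR"])
```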

Parameters (JSON Schema)

- fresh (optional): Bypass cache; call upstream directly.
- limit (optional): Max candidates to return (1-250). Default 10.
- query (optional): Company name or keyword. May be empty for FR / IE when searching purely by structured `filters`. AU also accepts structured `key:value` pairs in this field (e.g. 'postcode:2000 type:PUB active:Y').
- offset (optional): Pagination offset (IE / FR).
- filters (optional): Country-specific advanced filters. Flat object keyed by the upstream field name (e.g. FR `code_postal` / `ca_min`, CZ `czNace`, CH `canton`, FI `companyForm`, IE `alpha`, IS `vat_number`). Call `list_jurisdictions({jurisdiction:'<CC>'})` for the per-country schema.
- jurisdiction (optional): ISO 3166-1 alpha-2 country code (uppercase; CA subdivisions hyphenated, e.g. 'CA-BC'). Use this when one country is known. Mutually exclusive with `jurisdictions`.
- jurisdictions (optional): Array of ISO codes when the country is uncertain. The server asks the user to confirm the list (clients with MCP elicitation) or returns an error so you can ask in chat. Mutually exclusive with `jurisdiction`.

Output Schema (JSON Schema)

- count (optional)
- query (optional)
- results (optional): Candidate list (single-country key).
- cached_at (optional)
- candidates (optional): Candidate list (multi-country fan-out key).
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- jurisdiction (optional): Single-country mode.
- jurisdictions (optional): Multi-country fan-out mode.
- partial_failures (optional)
- per_jurisdiction (optional)

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, idempotentHint=true, etc., but the description adds crucial behavioral context: per-tier caps on country searches (anonymous=3, pro=10, etc.), confirmation dialogs for multi-jurisdiction mode, error handling for unsupported clients, cache bypass via 'fresh' parameter, and detailed output field explanations (unified fields vs. jurisdiction_data). It also notes that follow-up tools don't count against caps, enhancing operational understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (calling modes, caps, preferences, returns, caveats) and uses bullet-like formatting for readability. It's appropriately detailed for a complex tool but could be slightly more concise by reducing some repetitive explanations (e.g., the confirmation dialog is mentioned multiple times). Every sentence adds value, but minor trimming is possible.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's high complexity (62 parameters, no output schema), the description provides comprehensive context: it explains the two main calling modes, tier-based caps, confirmation behavior, return fields, per-country caveats, and references to other tools for further details. It compensates for the lack of output schema by describing the return structure and status field semantics, making it complete for agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the two primary modes (jurisdiction vs. jurisdictions) and their semantics, which aren't fully captured in the schema's individual parameter descriptions. It also hints at parameter interactions (e.g., query may be empty for FR/IE with structured filters) and directs to list_jurisdictions for per-country details, though it doesn't detail all 62 parameters individually.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches company registries by name or keyword, specifying two distinct calling modes (single vs. multi-jurisdiction). It distinguishes itself from siblings like search_companies_near_point by focusing on registry search rather than geographic proximity, and from get_company_profile by being a search rather than a direct lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use each mode: use jurisdiction (singular) when the user names a specific country, and jurisdictions (plural) when uncertain. It warns to prefer the singular mode when in doubt and explicitly mentions follow-up tools (get_company_profile, list_filings, etc.) that don't count against caps, helping differentiate from alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_officers (Search officers by name): A
Read-only · Idempotent

Find people who hold or have held officer positions (director, secretary, member, partner) in a jurisdiction's registry by name. Returns candidates with officer_id, name, and (where exposed) appointment count. Entry point for person-centric investigations.
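The person-centric workflow this describes (search by name, then trace every appointment through officer_id with get_officer_appointments) can be sketched with both calls stubbed on canned data:

```python
# Person-centric workflow sketch: search_officers yields candidates carrying
# officer_id; get_officer_appointments then lists every company that person
# was appointed to. Both calls are stubbed here with canned data.

def search_officers(query: str, jurisdiction: str) -> list:
    return [{"officer_id": "off-1", "name": "John Smith", "appointment_count": 2}]

def get_officer_appointments(officer_id: str, jurisdiction: str) -> list:
    return {"off-1": ["Acme Ltd", "Widgets Ltd"]}[officer_id]

def trace_person(name: str, jurisdiction: str = "GB") -> dict:
    """Map each matching officer to the companies they were appointed to."""
    return {
        c["name"]: get_officer_appointments(c["officer_id"], jurisdiction)
        for c in search_officers(name, jurisdiction)
    }

print(trace_person("John Smith"))  # {'John Smith': ['Acme Ltd', 'Widgets Ltd']}
```

With the real tools, a full name narrows the candidate list before fan-out, which keeps the appointment lookups cheap.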

Parameters (JSON Schema)

- limit (optional): Max officer candidates to return. Range 1-100, default 20.
- query (required): Officer name. Full names work best ('John Smith'). Partial names return more candidates.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)

- data (optional): Adapters returning a bare array are wrapped here by textResult().
- count (optional)
- query (optional)
- officers (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- jurisdiction (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare this as read-only, open-world, idempotent, and non-destructive. The description adds valuable behavioral context beyond annotations: it explains the investigative workflow (using officer_id with get_officer_appointments), describes partial name matching behavior, and clarifies what data is returned (officer_id, name, number of appointments where available). It doesn't mention rate limits or authentication needs, but adds meaningful operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with three sentences that each earn their place: first states the core functionality, second explains the output and next-step workflow, third provides strategic context about investigation patterns. No wasted words, front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations, and lack of output schema, the description provides excellent completeness. It covers purpose, usage patterns, behavioral context, and workflow integration. The annotations handle safety and idempotency, while the description adds investigative context and output interpretation guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 67% schema description coverage, the description adds meaningful context beyond the schema. While the schema documents the parameters technically, the description explains the investigative purpose of the 'query' parameter and the relationship between this search and subsequent lookups using officer_id. It doesn't provide additional syntax details for parameters, but adds strategic context about how parameters fit into workflows.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find people who hold or have held officer positions'), resource ('registry for company officers'), and scope ('by name'). It explicitly distinguishes this tool from its sibling 'get_officer_appointments' by explaining the relationship between them, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('entry point for "follow the human, not the company" investigations') and when to use an alternative ('Use the officer_id in get_officer_appointments to retrieve every company that person has been appointed to'). It also mentions jurisdictional constraints through the input schema, though not directly in the description text.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
