
Server Details

Unmodified government company data from 27 registries, live. Cross-border UBO chain walker.

Status: Healthy
Transport: Streamable HTTP
Repository: sophymarine/openregistry
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.6/5 across 27 of 27 tools scored.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between get_financials and list_filings (category='accounts'), and between get_shareholders and get_persons_with_significant_control, though their descriptions clarify the differences. The document navigation tools (fetch_document, get_document_navigation, search_document) are well-differentiated but could be confusing without careful reading.

Naming Consistency: 4/5

Tool names follow a consistent snake_case pattern with clear verb_noun structure (e.g., get_company_profile, list_filings, search_companies). Minor deviations exist, such as about (noun instead of verb_noun) and list_actos_inscritos (Spanish term), but overall naming is predictable and readable.

Tool Count: 3/5

With 27 tools, the count feels heavy for a company registry server, though the domain is broad (multiple jurisdictions, filings, officers, documents). Some tools are jurisdiction-specific (e.g., list_actos_inscritos for Spain, search_addresses for Czechia), which may justify the number, but it could overwhelm agents with niche or overlapping functionality.

Completeness: 5/5

The tool set provides comprehensive coverage for company registry operations, including search, profile retrieval, filings, officers, shareholders, charges, documents, and jurisdiction-specific data. There are no obvious gaps; tools like list_jurisdictions and about support metadata and discovery, ensuring agents can navigate the domain effectively.

Available Tools

27 tools
about (Grade: A): About this server
Read-only · Idempotent

Compact self-description (default response <1KB): server name, version, list of supported jurisdiction codes, list of tool names, pricing, rate limits. Pass section to expand a specific slice — 'principles', 'tools', 'data_licenses', 'jurisdictions' (compact capability map for every registered adapter), or 'jurisdiction' + jurisdiction (full metadata for one country). For the full per-jurisdiction schema (field lists, status mappings, ID formats, notes), prefer list_jurisdictions.
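The section/jurisdiction parameter interaction maps onto an MCP `tools/call` request. A minimal sketch follows; the `build_tool_call` helper is hypothetical, and a real MCP client library would handle framing and transport:

```python
def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build an MCP tools/call request body (JSON-RPC 2.0). Illustrative only."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Compact default envelope: omit `section` entirely.
compact = build_tool_call("about", {})

# One country's full metadata: section='jurisdiction' requires `jurisdiction`.
expanded = build_tool_call("about", {"section": "jurisdiction", "jurisdiction": "GB"})
```

The two calls differ only in their `arguments` object; the server keys its response size off which keys are present.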

Parameters (JSON Schema)
- section (optional): Optional section to expand. Omit for the compact default envelope. Use 'jurisdiction' together with the `jurisdiction` parameter to get one country's full metadata.
- jurisdiction (optional): ISO 3166-1 alpha-2 country code. Required only when section='jurisdiction'.

Output Schema (JSON Schema)
- name (optional)
- tools (optional)
- tagline (optional)
- version (optional)
- principles (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- fanout_caps (optional)
- rate_limits (optional)
- tools_count (optional)
- data_licenses (optional)
- jurisdictions (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the default response size (<1KB), mentions 'pricing' and 'rate limits' as included information, and explains the relationship between 'jurisdiction' section and parameter. While annotations cover safety (readOnlyHint, destructiveHint), the description provides operational details about response format and parameter interactions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: first sentence establishes the core purpose and default behavior, second explains parameter usage with clear examples, third provides an alternative recommendation. Every sentence adds value with zero redundancy, and key information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's informational nature, rich annotations (readOnlyHint, openWorldHint, idempotentHint), and complete schema coverage, the description provides excellent contextual completeness. It covers purpose, usage patterns, parameter interactions, output characteristics (size constraint), and even directs to alternatives when deeper jurisdiction data is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3, but the description adds meaningful context: it explains that omitting parameters yields the 'compact default envelope,' clarifies that 'jurisdiction' parameter is 'required only when section='jurisdiction',' and provides the practical purpose of expanding 'specific slices' of information. This goes beyond the schema's technical specifications.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool provides a 'compact self-description' of the server with specific content (server name, version, jurisdiction codes, tool names, pricing, rate limits). It clearly distinguishes this from sibling tools by being the only meta-information tool, while all others perform data operations like fetching company records or searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'Omit for the compact default envelope' for basic info, and specifies when to use the optional parameters - 'Pass `section` to expand a specific slice' with enumerated options. It also gives a clear alternative: 'For the full per-jurisdiction schema... prefer list_jurisdictions,' directly naming a sibling tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_name_availability (Grade: A): Check whether a proposed company name is available (IM only)
Read-only · Idempotent

Probe the Isle of Man Companies Registry 'Check Name Availability' endpoint (companynameavailability.iom). Returns { query, available, warning, similar_names[] } where available is true only when upstream does not emit the 'Name entered already exists' warning AND the similar-names table is empty. Each similar_names row carries the exact name, company number, registry type, status, and (when upstream linked it) the opaque Id of the existing company. Pricing: free; no login required. Other jurisdictions return 501.
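The availability rule can be restated as a predicate over the documented response fields. A sketch, assuming the { query, available, warning, similar_names[] } shape described above:

```python
def recompute_available(resp: dict) -> bool:
    """Mirror the documented rule: a name is available only when upstream
    emitted no 'Name entered already exists' warning AND the similar-names
    table is empty."""
    no_warning = not resp.get("warning")
    no_similar = not resp.get("similar_names")
    return no_warning and no_similar
```

Recomputing `available` client-side is a cheap consistency check against the upstream flag.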

Parameters (JSON Schema)
- fresh (optional): Bypass cache.
- sort_by (optional): Optional upstream sort column for the similar-names table.
- company_name (required): The proposed company name to test, e.g. 'Manx Padel Ltd'.
- jurisdiction (required): 'IM' only.
- sort_direction (optional): Optional sort direction: 0 = ascending, 1 = descending. Defaults to 0 when sort_by is set.

Output Schema (JSON Schema)
- query (optional)
- reason (optional)
- available (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- jurisdiction (optional)
- similar_names (optional)
- jurisdiction_data (optional): Full original response fields from the upstream registry, field names unchanged. Shape is jurisdiction-specific — see `list_jurisdictions({ jurisdiction: '<CODE>' })`.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the return structure ('Returns { query, available, warning, similar_names[] }'), defines availability logic ('available is true only when upstream does not emit the 'Name entered already exists' warning AND the similar-names table is empty'), and notes jurisdiction limitations ('Other jurisdictions return 501'). Annotations cover read-only, open-world, idempotent, and non-destructive traits, but the description enriches this with specific endpoint behavior and constraints.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by return details, availability logic, and operational notes. Each sentence adds value: endpoint specification, return structure, availability criteria, pricing/login info, and jurisdiction limits. No wasted words.


Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations (read-only, open-world, idempotent, non-destructive), and 100% schema coverage, the description is complete. It explains the endpoint, return format, availability logic, jurisdiction scope, and cost/access details. No output schema exists, but the description adequately covers return values.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description does not add significant parameter semantics beyond what the schema provides, though it implies the tool's focus on 'company_name' and 'jurisdiction' as core inputs. Baseline 3 is appropriate given high schema coverage.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'Probe the Isle of Man Companies Registry 'Check Name Availability' endpoint' to check 'whether a proposed company name is available'. It specifies the jurisdiction ('IM only') and distinguishes it from siblings by focusing on name availability rather than searching, counting, or fetching company data.


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage guidelines: it specifies when to use ('Check whether a proposed company name is available'), when not to use ('Other jurisdictions return 501'), and alternatives implicitly (e.g., use search_companies for broader queries). It also notes prerequisites: 'no login required' and 'Pricing: free'.


count_companies (Grade: A): Count companies matching a name/filter (IE only)
Read-only · Idempotent

Return the total number of companies that would match a search, without fetching the candidates themselves. Useful before paginating very large result sets to decide whether to narrow the query.

⚠️ Performance note: this is NOT cheaper than search_companies — CRO's /companycount endpoint runs the same underlying query and takes ~2s on average (similar to a full search). Only use it when the raw count is what you actually need (e.g. 'how many Coffee business names exist in Ireland?'). For 'is this query narrow enough to paginate?', it's faster to call search_companies with limit=1 — you'll get the first hit AND a sense of recall in one round-trip.

── IE (Ireland CRO) ── Maps to the /companycount endpoint. Supports the same filters as search_companies (query, match_type, bus_ind, include_business_names, address, alpha). Returns a plain integer. Pricing: free.

Other jurisdictions return 501 — Companies House/Brreg/ABR don't expose a count-only endpoint.
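The 'limit=1 instead' advice above can be sketched as a helper. `call_tool` stands in for whatever MCP client callable you use, and the `companies` result key is an assumption for illustration:

```python
def probe_query(call_tool, jurisdiction: str, query: str):
    """Instead of count_companies, call search_companies with limit=1 to get
    the first hit AND a sense of recall in one round-trip."""
    result = call_tool("search_companies", {
        "jurisdiction": jurisdiction,
        "query": query,
        "limit": 1,
    })
    hits = result.get("companies", [])  # assumed result shape
    return hits[0] if hits else None
```

If the probe returns a plausible first hit, the query is usually narrow enough to paginate; reserve count_companies for when the raw number is the answer.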

Parameters (JSON Schema)
- alpha (optional)
- query (optional): Company name or keyword. May be empty when combined with address/alpha filters.
- address (optional)
- bus_ind (optional)
- match_type (optional)
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
- include_business_names (optional)

Output Schema (JSON Schema)
- count (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- jurisdiction (optional)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond what annotations provide. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description adds: performance characteristics (~2s average, similar to full search), pricing (free), jurisdiction-specific behavior (IE only, others return 501), endpoint mapping (/companycount), and return format (plain integer). This provides practical implementation details not captured in annotations.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear sections: purpose statement, performance warning with specific guidance, jurisdiction details, and parameter mapping. Every sentence adds value—no redundant information. The use of emojis and formatting (⚠️, ──) enhances readability without adding fluff.


Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, jurisdiction-specific behavior, performance considerations) and lack of output schema, the description provides comprehensive context. It covers purpose, usage scenarios, performance trade-offs, jurisdiction limitations, endpoint mapping, pricing, return type, and parameter relationships. This makes the tool's behavior and constraints fully understandable to an agent.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 29% schema description coverage, the description compensates well by explaining parameter context. It notes that parameters 'support the same filters as search_companies' and lists them (query, match_type, bus_ind, include_business_names, address, alpha), providing semantic meaning beyond the bare schema. However, it doesn't explain individual parameter purposes or constraints in detail, keeping it at a 4 rather than 5.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return the total number of companies that would match a search, without fetching the candidates themselves.' It specifies the verb ('return'), resource ('companies'), and scope ('matching a name/filter'), and distinguishes it from sibling tools like search_companies by emphasizing it only provides a count, not the actual results.


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states it's 'useful before paginating very large result sets' and gives a specific example ('how many Coffee business names exist in Ireland?'). It also explicitly advises against using it for performance reasons in some cases, recommending 'search_companies with limit=1' instead for checking query narrowness, and notes jurisdiction limitations (IE only, others return 501).


fetch_document (Grade: A): Fetch a filing document. Small = inlined bytes; oversized = resource_link + navigation tools.
Read-only · Idempotent

Primary tool for reading a filing's content. Pass a document_id from list_filings / get_financials. MANDATORY for any substantive answer — filing metadata (dates, form codes, descriptions) alone doesn't answer the user; the numbers and text live inside the document.

── RESPONSE SHAPES ── • kind='embedded' (PDF up to ~20 MB; structured text up to max_bytes): returns bytes_base64 with the full document, source_url_official (evergreen registry URL for citation, auto-resolved), and source_url_direct (short-TTL signed proxy URL). For PDFs the host converts bytes into a document content block — you read it natively including scans. • kind='resource_link' (document exceeds max_bytes): NO bytes_base64. Returns reason, next_steps, the two source URLs, plus index_preview for PDFs ({page_count, text_layer, outline_present, index_status}). Use the navigation tools below.

── WORKFLOW FOR kind='resource_link' ──

  1. Read index_preview.text_layer. Values: full (every page has real text), partial (mixed), none (scanned / image-only), oversized_skipped (indexing skipped), encrypted / failed.

  2. If full / partial: call get_document_navigation (outline + previews + landmarks) and/or search_document to locate pages. If none / oversized_skipped: skip search.

  3. Call fetch_document_pages(pages='N-M', format='pdf'|'text'|'png') to get actual content. Prefer pdf for citations, text for skim, png for scanned or oversized.
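The three steps above can be sketched as a dispatcher over index_preview.text_layer. The tool names are this server's; the control flow itself is an assumption about how an agent might sequence them:

```python
def plan_navigation(text_layer: str) -> list[str]:
    """Choose follow-up tools for a kind='resource_link' response,
    following steps 1-3 above."""
    if text_layer in ("full", "partial"):
        # Searchable text layer: locate pages, then fetch the actual content.
        return ["get_document_navigation", "search_document", "fetch_document_pages"]
    if text_layer in ("none", "oversized_skipped"):
        # No usable text layer: skip search, fetch pages directly (png for scans).
        return ["fetch_document_pages"]
    # 'encrypted' / 'failed': nothing to navigate client-side.
    return []
```

An empty plan means falling back to source_url_official rather than guessing at content.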

── CRITICAL RULES ── • Navigation-aids-only: previews, snippets, landmark matches, and outline titles returned by the navigation tools are for LOCATING pages. NEVER cite them as source material — quote only from fetch_document_pages output or this tool's inline bytes. • No fallback to memory: if this tool fails (rate limit, 5xx, disconnect), do NOT fill in names / numbers / dates from training data. Tell the user what failed and offer retry or source_url_official. • Don't reflexively retry with a larger max_bytes — for big PDFs the bytes are unreadable to you anyway. Use the navigation tools instead.

source_url_official is auto-resolved from a session-side cache populated by the most recent list_filings call. The optional company_id / transaction_id / filing_type / filing_description inputs are OVERRIDES for the rare case where document_id didn't come through list_filings. Per-country document availability, format, and pricing — call list_jurisdictions({jurisdiction:"<code>"}).

Parameters (JSON Schema)
- fresh (optional): Set true to bypass the R2 cache and re-fetch from upstream. Use sparingly — CH filings are immutable, the cache is safe.
- format (optional): Optional preferred content type. Common: application/xhtml+xml, application/pdf, application/xml, application/json. Omit to let the adapter choose the most structured format available (recommended — XHTML > XML > JSON > PDF).
- max_bytes (optional): Optional inline-size cutoff. Defaults to ~20 MB. Documents above this come back as kind='resource_link' (use navigation tools). Raising this is NOT the right way to read a big PDF — use fetch_document_pages instead.
- company_id (optional): OVERRIDE (rare use). Normally auto-resolved from the list_filings side-cache. Only pass this when invoking fetch_document on a document_id that did NOT come through list_filings in this session.
- document_id (required)
- filing_type (optional): OVERRIDE (rare use). Normally auto-resolved. Pass only to override the cached value.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
- transaction_id (optional): OVERRIDE (rare use). Normally auto-resolved from the list_filings side-cache. Pass only to override the cache.
- filing_description (optional): OVERRIDE (rare use). Normally auto-resolved.

Output Schema (JSON Schema)
- pages (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- size_bytes (optional)
- source_url (optional)
- document_id (optional)
- bytes_base64 (optional)
- jurisdiction (optional)
- chosen_format (optional)
- available_formats (optional)
- bytes_omitted_reason (optional)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds substantial behavioral context beyond annotations: it explains the two response shapes (embedded vs resource_link), details workflow steps for oversized documents, provides critical rules about citation practices and error handling, describes caching behavior and auto-resolution mechanisms, and explains jurisdiction-specific considerations. While annotations cover basic safety (readOnlyHint, idempotentHint), the description adds rich operational context.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Response Shapes, Workflow, Critical Rules) and uses bullet points effectively. While comprehensive, it maintains focus on essential information - every sentence serves a clear purpose in guiding tool usage. The front-loaded statement about being the 'Primary tool for reading a filing's content' immediately establishes purpose.


Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, no output schema), the description provides exceptional completeness: it covers response formats, error handling, workflow integration with sibling tools, jurisdiction considerations, caching behavior, and practical usage constraints. The description fully compensates for the lack of output schema by detailing what the tool returns and how to interpret different response types.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 89% schema description coverage, the baseline would be 3, but the description adds meaningful context about parameter usage: it explains the primary use of document_id (from list_filings/get_financials), clarifies that override parameters are for 'rare use' cases, provides practical guidance on format selection ('recommended — XHTML > XML > JSON > PDF'), and explains the practical implications of max_bytes settings. This adds significant value beyond the schema descriptions.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states this is the 'Primary tool for reading a filing's content' and distinguishes it from sibling tools by explaining that filing metadata alone is insufficient - the actual content requires this tool. It clearly identifies the verb (reading/fetching) and resource (filing documents).


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides comprehensive usage guidance: it specifies when to use this tool ('MANDATORY for any substantive answer'), explains the workflow for different response types, distinguishes when to use sibling tools like 'fetch_document_pages' and 'get_document_navigation', and explicitly states when NOT to use certain approaches ('Don't reflexively retry with a larger max_bytes').


fetch_document_pages (Grade: A): Fetch a subset of pages from a cached PDF
Read-only · Idempotent

Return specific pages of a PDF in one of three formats: • format='pdf' — pdf-lib page slice, preserves the original text layer and fonts (no re-encoding). This is the ONLY format that gives you byte-exact, citation-grade content. Use this for financial numbers, legal quotes, and any answer requiring precision. • format='text' — raw extracted text from pdfjs. Machine-readable but NOT authoritative — OCR errors on bad-quality text layers can silently garble digits. Use only for summarisation / light reading, and cross-check numbers by re-fetching with format='pdf'. • format='png' — page rasterization via Cloudflare Browser Rendering, for documents with text_layer='none' (scanned PDFs). Phase 6 — may return 'not implemented' in current deployment.

The response includes at most 100 pages (Anthropic document-block hard cap). Split larger ranges into multiple calls.

Requires the document's bytes to already be cached — call fetch_document on the full document first if this is a new filing.
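The page spec ('1-5', '3,7,9', '1,3-5') and the 100-page cap can be validated client-side before calling. A minimal sketch:

```python
def parse_page_spec(spec: str, cap: int = 100) -> list[int]:
    """Expand a 1-based page spec like '1-5', '3,7,9', or '1,3-5' and
    enforce the documented per-call cap."""
    pages: list[int] = []
    for part in spec.split(","):
        if "-" in part:
            start, end = (int(x) for x in part.split("-"))
            pages.extend(range(start, end + 1))
        else:
            pages.append(int(part))
    if len(pages) > cap:
        raise ValueError(f"{len(pages)} pages requested; max {cap} per call")
    return pages
```

Splitting an oversized range into multiple calls, as the description advises, is then just chunking the expanded list.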

Parameters (JSON Schema)
- dpi (optional): DPI for format='png'. Default 150. 72 for thumbnails, 200+ for high-detail reading.
- pages (required): Page spec like '1-5', '3,7,9', or '1,3-5'. 1-based. Max 100 pages per call.
- format (optional): Output format. Use 'pdf' for authoritative content (default), 'text' for quick skimming, 'png' for scanned documents. Default: pdf.
- company_id (optional): OVERRIDE (rare use). Normally auto-resolved from the list_filings side-cache.
- document_id (required)
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
- transaction_id (optional): OVERRIDE (rare use). Normally auto-resolved.

Output Schema (JSON Schema)
- text (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- size_bytes (optional)
- document_id (optional)
- bytes_base64 (optional)
- jurisdiction (optional)
- chosen_format (optional)
- pages_requested (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: the 100-page hard cap, the need to cache documents first, format-specific behaviors (e.g., 'pdf' preserves text layers, 'text' may have OCR errors, 'png' may return 'not implemented'), and performance considerations. Annotations cover safety (readOnlyHint=true, destructiveHint=false), but the description enriches this with practical constraints.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with bullet points for format options and clear, front-loaded sentences. Every sentence adds value: it explains formats, usage scenarios, limitations, and prerequisites without redundancy. The information density is high and efficiently organized.


Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, no output schema), the description is highly complete. It covers purpose, usage guidelines, behavioral traits, parameter semantics, and limitations. The lack of output schema is compensated by detailed format explanations. It provides all necessary context for an agent to use the tool effectively.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 86% schema description coverage, the baseline is 3, but the description adds meaningful context: it explains the semantic differences between format options in detail (e.g., 'pdf' for citation-grade content, 'text' for machine-readable but not authoritative), which complements the schema's enum descriptions. It also clarifies the pages parameter's 100-page limit and splitting advice.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('return specific pages') and resources ('PDF'), distinguishing it from sibling tools like fetch_document (which fetches full documents) and get_document_metadata (which provides metadata). It explicitly mentions the three output formats and their characteristics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use each format ('pdf' for authoritative content, 'text' for summarization, 'png' for scanned documents), when not to use them (e.g., 'text' is not authoritative for numbers), and prerequisites ('call fetch_document on the full document first if this is a new filing'). It also mentions the 100-page limit and suggests splitting larger ranges.
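
The splitting advice referenced here is mechanical enough to sketch. A minimal Python helper (hypothetical; only the 100-page per-call cap comes from the tool description) that breaks a large range into valid fetch_document_pages calls:

```python
def split_page_range(start, end, max_pages=100):
    """Split an inclusive page range into chunks of at most max_pages.

    Hypothetical helper: only the 100-page per-call cap comes from the
    tool description; everything else is illustration.
    """
    chunks = []
    page = start
    while page <= end:
        chunk_end = min(page + max_pages - 1, end)
        chunks.append((page, chunk_end))
        page = chunk_end + 1
    return chunks

# A 250-page filing becomes three calls of at most 100 pages each.
print(split_page_range(1, 250))
```

Each resulting (start, end) pair would be one pages= argument to a separate fetch_document_pages call.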

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_charges: List the charges (mortgages, secured debt) registered against a company [A]
Read-only, Idempotent

Return charges (mortgages, fixed and floating charges, pledges, security interests) registered against a company. Primary tool for security-interest and lender analysis.

Each charge has charge_id, status (outstanding / satisfied / part-satisfied), classification (e.g. 'fixed charge', 'floating charge', 'pledge on share stake'), created_on, satisfied_on if applicable, and persons_entitled (lenders / chargeholders). Raw upstream fields come through verbatim under jurisdiction_data. Returns an empty list (not an error) for companies with no registered charges.

Scope is registry-specific: some jurisdictions keep real-estate mortgages, movable-asset pledges, or receivables in separate registers this tool does not reach. Unsupported jurisdictions return 501; some return 501 and suggest list_filings(category='charges') as an alternative. Per-country scope, classifications, and caveats — call list_jurisdictions({jurisdiction:"<code>"}).
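
As a sketch of the "security-interest and lender analysis" this tool is meant for, the helper below groups still-outstanding charges by chargeholder. The field names (charge_id, status, persons_entitled) are the ones quoted above; the list shape and sample data are assumptions.

```python
def outstanding_exposure(charges):
    """Group still-outstanding charges by chargeholder.

    Assumed input: a list of charge records using the field names quoted
    in the tool description (charge_id, status, persons_entitled).
    """
    by_lender = {}
    for charge in charges:
        if charge.get("status") != "outstanding":
            continue  # ignore satisfied / part-satisfied charges
        for lender in charge.get("persons_entitled", []):
            by_lender.setdefault(lender, []).append(charge["charge_id"])
    return by_lender

# Invented sample data, shaped like the fields listed above.
sample = [
    {"charge_id": "c1", "status": "outstanding", "persons_entitled": ["Bank A"]},
    {"charge_id": "c2", "status": "satisfied", "persons_entitled": ["Bank A"]},
    {"charge_id": "c3", "status": "outstanding", "persons_entitled": ["Bank A", "Bank B"]},
]
print(outstanding_exposure(sample))
```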

Parameters (JSON Schema)

- fresh (optional)
- company_id (required)
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)

- data (optional)
- items (optional)
- charges (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- next_cursor (optional)
- total_count (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it specifies that empty results return an empty list (not an error), describes jurisdiction-specific limitations, and mentions error codes (501) for unsupported cases. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with core functionality, followed by important caveats and alternatives. Every sentence adds value: the first defines purpose, the second details return fields, the third covers edge cases, and the fourth explains jurisdiction limitations and workarounds. No wasted words.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (jurisdiction-dependent, multiple return fields) and lack of output schema, the description is highly complete. It details return fields (charge_id, status, etc.), edge cases (empty list for no charges), error handling (501 for unsupported jurisdictions), and links to other tools for further context. Annotations cover safety, so the description focuses on operational nuances.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With schema description coverage at only 33% (1 of 3 parameters described), the description compensates by clarifying parameter semantics. It explains that 'jurisdiction' scope affects results and links to `list_jurisdictions` for details, and implies 'company_id' identifies the target company. However, it doesn't detail the 'fresh' parameter's effect.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return charges (mortgages, fixed and floating charges, pledges, security interests) registered against a company.' It uses specific verbs ('return', 'registered against') and resources ('charges', 'company'), and distinguishes itself from siblings by specifying it's for 'security-interest and lender analysis' rather than general company data.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use alternatives: 'Unsupported jurisdictions return 501; some return 501 and suggest `list_filings(category='charges')` as an alternative.' It also advises calling `list_jurisdictions` for per-country scope details, giving clear context for tool selection.

get_code_description: Look up a registry code-list (FI / CZ / CH) [A]
Read-only, Idempotent

Resolve a registry code-list to a human-readable code → description map. Useful for decoding values seen in jurisdiction_data fields.

── FI (Finland PRH) ── Codes: YRMU (company forms), KRTILA (trade-register status), TOIMI4 (TOL 2008 industries), ALUE (regions). lang: en | fi | sv (default en).

── CZ (Czechia ARES) ── Codes: PravniForma / FinancniUrad / TypAngazma / TypOrganu / StavZdroje / TypAkcie. Czech only.

── CH (Switzerland Zefix) ── Codes: legalForm (entity types — AG/Sàrl/Verein/...), registryOfCommerce (26 cantonal registries), community (Swiss communes by BFS ID). Multilingual (de/fr/it/en).

Returns a flat object: { code: description, … }. Pricing: free.

Other jurisdictions return 501.
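
A sketch of the decoding workflow the description suggests: look raw jurisdiction_data values up in the returned flat map. The map contents below are invented stand-ins, not real YRMU data.

```python
def decode_codes(values, code_map):
    """Translate raw registry codes via the flat { code: description }
    object this tool returns. Unknown codes are flagged, not dropped."""
    return [code_map.get(v, f"unknown code: {v}") for v in values]

# Invented stand-in for an FI YRMU-style map; real labels come from the tool.
company_forms = {"16": "Limited company", "26": "Housing company"}
print(decode_codes(["16", "99"], company_forms))
```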

Parameters (JSON Schema)

- code (required): Code-list identifier. FI: YRMU/KRTILA/TOIMI4/ALUE. CZ: PravniForma/FinancniUrad/TypAngazma/TypOrganu/StavZdroje/TypAkcie. CH: legalForm/registryOfCommerce/community.
- lang (optional, default: en): Description language (FI only — CZ ignores).
- fresh (optional): Bypass the cache.
- jurisdiction (required): 'FI', 'CZ', or 'CH'.

Output Schema (JSON Schema)

- code (optional)
- data (optional)
- entries (optional)
- code_set (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- description (optional)
- jurisdiction (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable behavioral context beyond annotations: it specifies the return format ('flat object: { code: description, … }'), mentions pricing ('free'), and documents jurisdiction limitations ('Other jurisdictions return 501'). However, it doesn't mention caching behavior despite the 'fresh' parameter existing.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with clear jurisdiction sections, bullet-like formatting for code examples, and no wasted words. Every sentence adds value: purpose statement, jurisdiction-specific details, return format, pricing, and error conditions. The information is front-loaded with the core purpose first.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only lookup tool with comprehensive annotations and 100% schema coverage, this description provides excellent contextual completeness. It covers jurisdiction variations, code examples, language support, return format, pricing, and error cases. The lack of output schema is compensated by explicitly describing the return format. No significant gaps remain for this type of tool.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds meaningful context beyond the schema: it organizes parameters by jurisdiction (FI/CZ/CH sections), provides concrete code examples for each jurisdiction, clarifies language applicability ('FI only — CZ ignores'), and explains the purpose of the 'fresh' parameter ('Bypass the cache'). This significantly enhances understanding beyond the schema's technical definitions.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'resolve' and resource 'registry code-list', specifying it converts codes to human-readable descriptions. It explicitly distinguishes this from sibling tools by mentioning its specific use for 'decoding values seen in jurisdiction_data fields', which none of the sibling tools appear to do.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance ('useful for decoding values seen in jurisdiction_data fields'), jurisdiction-specific code lists, language support details, and clear exclusions ('Other jurisdictions return 501'). It also distinguishes when language matters (FI only) versus when it's ignored (CZ).

get_company_profile: Get the structured profile of a company by its registry ID [A]
Read-only, Idempotent

Fetch the full profile of a company by its registry-specific ID. Returns unified top-level fields (jurisdiction, company_id, company_name, status, status_detail, incorporation_date, registered_address) plus a jurisdiction_data object carrying the raw upstream fields verbatim. The status field is a coarse four-value enum (active / inactive / dissolved / unknown) safe for cross-country comparison; status_detail carries the registry's native status string. The registered_address top-level field is a flattened string; the original nested address (when upstream provides one) is preserved in jurisdiction_data.

Does NOT include filings, officers, PSCs, shareholders, or charges — call the dedicated tools (list_filings, get_officers, get_persons_with_significant_control, get_shareholders, get_charges) for those.

Input company_id is the registry's canonical identifier for the jurisdiction; shapes vary (8-digit numbers, prefixed alphanumerics, hyphenated forms, multi-shape routing by length, etc.) and many registries accept light normalisation (leading-zero padding, whitespace / hyphen stripping, alternate equivalents). Pull a company_id from search_companies whenever possible rather than guessing.

Optional flags include_vr, include_history, include_establishments enable extra upstream fetches on jurisdictions that support them and are ignored elsewhere. fresh: true bypasses the cache.

Per-country caveats (ID format, accepted input shapes, jurisdiction_data field catalogue, paid-tier gates, status taxonomy) are available on demand — call list_jurisdictions({jurisdiction:"<code>"}) for the full schema, or list_jurisdictions({supports_tool:"get_company_profile"}) for the country-support matrix. All registries are official government sources.
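
The "light normalisation" described above can be illustrated client-side. This Python sketch mirrors two of the documented rules (CZ zero-padding, FI Y-tunnus dash insertion); it shows the accepted input shapes, not the server's actual routine.

```python
def normalise_company_id(jurisdiction, raw):
    """Mirror two of the documented normalisation rules: CZ ICO values of
    1-7 digits are zero-padded to 8, and an 8-digit FI Y-tunnus gets its
    dash re-inserted. Client-side illustration only; the registries
    reportedly apply the same normalisation themselves.
    """
    value = raw.strip().replace(" ", "")
    if jurisdiction == "CZ":
        return value.zfill(8)
    if jurisdiction == "FI" and "-" not in value and len(value) == 8:
        return f"{value[:7]}-{value[7]}"
    return value

print(normalise_company_id("CZ", "1234567"))   # zero-padded to 8 digits
print(normalise_company_id("FI", "01120389"))  # dash re-inserted
```

Even with such pre-checks, the safer path remains pulling company_id from search_companies rather than guessing.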

Parameters (JSON Schema)

- fresh (optional): Bypass cache. Default false.
- company_id (required): Registry-specific company identifier. GB: 8-digit Companies House number (e.g. '00445790'), or SC/NI/OC/LP prefix (e.g. 'SC123456'). NO: 9-digit organisation number (e.g. '923609016'). AU: 11-digit ABN (e.g. '16009661901') or 9-digit ACN (e.g. '009661901'). IE: numeric (e.g. '104547'); add '/B' suffix to query the business-name register (e.g. '540274/B'). FR: 9-digit SIREN (e.g. '652014051') or 14-digit SIRET (auto-resolved to parent SIREN). FI: Y-tunnus '7digits-1digit' (e.g. '0112038-9'); 8-digit no-dash form auto-reformatted. CZ: 8-digit IČO (e.g. '27074358'); 1-7 digit values auto-padded. PL: 10-digit KRS number (e.g. '0000635012'); 1-9 digit values auto-padded. See the tool description for full details.
- include_vr (optional): CZ only. When true, also fetch the VR (commercial register) record and merge it under jurisdiction_data._vr — adds spisovaZnacka (case file number array; use latest by datumZapisu), zakladniKapital (share capital history with vklad/splaceni), akcie (share emissions), cinnosti (registered business activities), insolvence + konkursy (insolvency proceedings — full records, not just the administrators surfaced via get_officers), rejstrik (public-register type), stavSubjektu. Slower than the basic profile (one extra upstream call) but avoids needing get_officers/PSC/charges just to inspect these fields.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
- include_history (optional): PL only. When true, ALSO fetch /OdpisPelny and graft the full historical entry log (naglowekP.wpis[] — every change ever made to the KRS record, with numerWpisu / opis / dataWpisu / sygnaturaAktSprawyDotyczacejWpisu) onto jurisdiction_data. Doubles upstream calls.
- include_establishments (optional): BE only. When true, ALSO fetch the vestiginglijst (establishment-units list) and graft it onto jurisdiction_data.establishments[] — each unit's 10-digit vestigingsnummer, status, start date, name, and address. One extra upstream call; omit to just get the count + establishments_list_url.

Output Schema (JSON Schema)

- status (optional): Four-value unified status safe for cross-jurisdiction comparison.
- company_id (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- company_name (optional)
- jurisdiction (optional)
- status_detail (optional)
- jurisdiction_data (optional): Full original response fields from the upstream registry, field names unchanged. Shape is jurisdiction-specific — see `list_jurisdictions({ jurisdiction: '<CODE>' })`.
- incorporation_date (optional)
- registered_address (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations. While annotations declare read-only, non-destructive, and idempotent operations, the description details the return structure (unified fields plus jurisdiction_data), explains status field semantics, describes caching behavior ('fresh: true bypasses the cache'), and mentions performance implications of optional flags ('slower', 'doubles upstream calls'). No contradictions with annotations are present.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and efficiently packed with information. It front-loads the core purpose, then details return values, exclusions, input guidance, optional flags, and per-country caveats. While comprehensive, some sentences are lengthy, and the density of information might slightly reduce immediate clarity, though every sentence adds value.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema, rich annotations), the description is largely complete. It covers purpose, usage, return structure, exclusions, input guidance, and jurisdictional nuances. However, without an output schema, it could more explicitly detail the full return format or error conditions, though the annotations provide safety context.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description adds meaningful context beyond the schema: it explains the variability of company_id formats, provides guidance on obtaining company_id from 'search_companies', clarifies that optional flags are jurisdiction-specific and ignored elsewhere, and explains the purpose of the fresh parameter. However, it doesn't fully detail all parameter interactions or edge cases.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch the full profile of a company by its registry-specific ID.' It specifies the verb ('fetch'), resource ('company profile'), and key identifier ('registry-specific ID'), and distinguishes it from siblings by explicitly listing what it does NOT include (filings, officers, etc.) and naming alternative tools for those purposes.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states what the tool does NOT include and names specific sibling tools for those purposes (e.g., 'list_filings', 'get_officers'). It also advises pulling company_id from 'search_companies' rather than guessing, and directs users to 'list_jurisdictions' for per-country details.

get_document_metadata: Get metadata for a filing document before downloading it [A]
Read-only, Idempotent

Retrieve metadata about a filing document by its document_id (obtained from list_filings). Returns available content formats with byte sizes (when known), page count, source URL, and creation date. Raw upstream fields come through verbatim under jurisdiction_data.

Call this before fetch_document when the document might be large or you don't yet know the format — it lets you decide whether to download inline or hand the source_url to the user.

Do NOT construct or guess document_id values — some registries use composite IDs (multi-part, colon- or slash-separated) that must come from a previous list_filings response. Synthesized IDs will 404 or 502.

available_formats may be empty when the body is paywalled or the registry doesn't publish bodies at all — in those cases fetch_document returns 501 / a purchase link. Unsupported jurisdictions return 501. Per-country ID format, pricing, and availability — call list_jurisdictions({jurisdiction:"<code>"}).
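
The decide-before-downloading workflow can be sketched as follows. Field names match the output schema below; the 5 MB inline threshold is an arbitrary assumption, not a documented limit.

```python
def pick_delivery(metadata, inline_limit_bytes=5_000_000):
    """Choose between downloading inline and handing back source_url,
    using the fields this tool returns (available_formats,
    size_bytes_by_format, source_url). The 5 MB threshold is an
    arbitrary assumption, not a documented limit.
    """
    formats = metadata.get("available_formats") or []
    if not formats:
        # Paywalled or unpublished body: fetch_document would return 501.
        return ("unavailable", metadata.get("source_url"))
    sizes = metadata.get("size_bytes_by_format", {})
    smallest = min(formats, key=lambda f: sizes.get(f, float("inf")))
    if sizes.get(smallest, 0) <= inline_limit_bytes:
        return ("inline", smallest)
    return ("link", metadata.get("source_url"))

meta = {
    "available_formats": ["pdf", "xhtml"],
    "size_bytes_by_format": {"pdf": 12_000_000, "xhtml": 800_000},
    "source_url": "https://example.invalid/doc",
}
print(pick_delivery(meta))
```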

Parameters (JSON Schema)

- fresh (optional)
- document_id (required): Document ID from a previous list_filings call.
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)

- pages (optional)
- created_at (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- source_url (optional)
- document_id (optional)
- jurisdiction (optional)
- available_formats (optional)
- size_bytes_by_format (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it explains error conditions (404/502 for synthesized IDs, 501 for paywalled/unsupported jurisdictions), availability caveats (available_formats may be empty), and the purpose of checking metadata before downloading large documents. No contradiction with annotations.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear paragraphs: first states purpose and returns, second provides usage guidance, third gives critical warnings, and fourth covers edge cases. Every sentence adds value without redundancy, and key points are front-loaded (e.g., the warning about not constructing IDs is emphasized).

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (metadata retrieval with jurisdiction-specific behaviors), the description is highly complete. It covers purpose, usage, prerequisites, error cases, and relationships to other tools. While there's no output schema, the description details return values (formats, sizes, page count, etc.). The annotations provide safety context, and the description fills in behavioral nuances adequately.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67% (2 of 3 parameters have descriptions). The description adds meaningful context for document_id beyond the schema's 'Document ID from a previous list_filings call' by warning against constructing IDs and explaining composite ID formats. It also clarifies jurisdiction usage by referencing list_jurisdictions for details. However, it doesn't explicitly address the 'fresh' parameter's semantics.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('retrieve metadata') and resource ('filing document'), distinguishing it from sibling tools like fetch_document (which downloads content) and list_filings (which lists documents). It explicitly mentions what metadata is returned (content formats, byte sizes, page count, source URL, creation date, jurisdiction_data).

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('call this before fetch_document when the document might be large or you don't yet know the format') and when not to use it ('do NOT construct or guess document_id values'). It names specific alternatives (list_filings for obtaining IDs, fetch_document for downloading, list_jurisdictions for jurisdiction details).

get_document_navigation: Open the navigation index for a cached document [A]
Read-only, Idempotent

For PDFs that don't fit in a single document block (>~20 MB or >100 pages) OR whenever you need to locate specific sections, call this FIRST before fetching content. Returns outline (PDF bookmarks), per-page text previews (first ~200 chars), keyword-matched landmarks (balance sheet, directors report, auditor report etc.), text_layer classification, and source URLs.

CRITICAL — these are NAVIGATION AIDS ONLY. Page previews, outline titles, landmark matches, and search snippets may be truncated, contain OCR errors, or match false positives. NEVER cite them as source material for numbers, quotes, legal text, financial figures, dates, or names. Always follow up with fetch_document_pages(pages=) to retrieve authoritative content before answering.

Requires the document bytes to already be cached — call fetch_document once first if this is a new document.
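
A sketch of the intended flow: use a landmark hit to plan an authoritative fetch_document_pages call. The navigation payload shape below (landmarks with a label and page) is assumed from the description; real responses may differ.

```python
def pages_for_landmark(navigation, label, context=1):
    """Turn a keyword-matched landmark into a page range for
    fetch_document_pages. The payload shape (landmarks with label/page)
    is assumed from the description; landmark hits are navigation aids
    only, so the pages must still be fetched before quoting anything.
    """
    for landmark in navigation.get("landmarks", []):
        if landmark.get("label") == label:
            page = landmark["page"]
            return (max(1, page - context), page + context)
    return None

# Invented payload, shaped like the description's landmark examples.
nav = {"landmarks": [{"label": "balance sheet", "page": 14},
                     {"label": "auditor report", "page": 3}]}
print(pages_for_landmark(nav, "balance sheet"))
```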

Parameters (JSON Schema)

- fresh (optional): Set true to ignore the cached index.json and re-run pdfjs against the stored source bytes. Does not re-pull from upstream.
- company_id (optional): OVERRIDE (rare use). Normally auto-resolved from the list_filings side-cache.
- document_id (required)
- jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
- transaction_id (optional): OVERRIDE (rare use). Normally auto-resolved.

Output Schema (JSON Schema)

- pages (optional)
- outline (optional)
- headings (optional)
- queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
- document_id (optional)
- jurisdiction (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context: navigation aids may be truncated, contain OCR errors, or have false positives, and it explains the caching requirement and the 'fresh' parameter's effect. No contradiction with annotations.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections: purpose, returns, critical warnings, and prerequisites. It's front-loaded with key information, though it could be slightly more concise by integrating some details (e.g., jurisdiction support) more tightly. Every sentence adds value, but minor redundancy exists.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (navigation for large documents) and lack of output schema, the description is mostly complete: it explains purpose, usage, behavioral caveats, and prerequisites. However, it doesn't detail the exact structure of returned navigation data (e.g., format of outlines or landmarks), which could aid agent interpretation. Annotations cover safety aspects well.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 80%, so the schema documents most parameters well. The description adds context for 'jurisdiction' by mentioning it's for official government sources and referencing 'list_jurisdictions' for details, but doesn't explain other parameters like 'document_id' or 'fresh' beyond what the schema provides. Baseline 3 is appropriate given high schema coverage.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Open the navigation index for a cached document' and specifies it returns 'outline (PDF bookmarks), per-page text previews, keyword-matched landmarks, text_layer classification, and source URLs.' It distinguishes from siblings like 'fetch_document_pages' by emphasizing this is for navigation aids only, not authoritative content.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: call this FIRST for large PDFs or when locating sections, NEVER cite navigation aids as source material, and always follow up with 'fetch_document_pages' for authoritative content. It also specifies prerequisites: requires cached document bytes and to call 'fetch_document' first for new documents, clearly differentiating from alternatives.

get_financials: List a company's annual financial statements [A]
Read-only, Idempotent

Return annual-accounts filings (financial statements) for a company. Convenience wrapper over list_filings(category='accounts') that normalizes the fiscal-period shape across registries and pre-computes the download URL so callers don't need a second get_document_metadata round-trip.

Each item has period_end (fiscal-period end date, the primary sort key a user thinks in), optional period_start / registration_date, a document_id that can be passed to fetch_document, document_format (e.g. XBRL XML, XHTML, PDF — may be empty when the upstream negotiates format on fetch), source_url for direct download, and jurisdiction_data carrying raw upstream fields verbatim. Results are newest-first.

Filters: year=YYYY keeps periods ending in that calendar year; period_end=YYYY-MM-DD pinpoints a single period (takes precedence over year). limit caps the post-filter slice — omit to return all matches. The whole accounts history is walked per query because late-filed amendments can land out of order.

If the adapter doesn't implement list_filings at all, this returns 501. Per-country caveats (ID format, document format availability, whether bodies are paid) — call list_jurisdictions({jurisdiction:"<code>"}).
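The filter semantics above (an exact period_end beats year, limit slices after filtering, newest-first order preserved) can be sketched client-side. A minimal sketch; the function name and sample filings are illustrative, not part of the server:

```python
from typing import Optional

def filter_accounts(items: list[dict], year: Optional[int] = None,
                    period_end: Optional[str] = None,
                    limit: Optional[int] = None) -> list[dict]:
    """Apply the documented get_financials filters to a newest-first list."""
    if period_end is not None:
        # an exact period_end takes precedence over year
        items = [i for i in items if i["period_end"] == period_end]
    elif year is not None:
        # any fiscal period *ending* in that calendar year matches
        items = [i for i in items if i["period_end"].startswith(f"{year}-")]
    # limit caps the post-filter slice; omitting it returns everything
    return items if limit is None else items[:limit]

filings = [
    {"period_end": "2024-12-31"},
    {"period_end": "2024-03-31"},  # non-calendar fiscal year, still year=2024
    {"period_end": "2023-12-31"},
]
```

Under these semantics, `filter_accounts(filings, year=2023, period_end="2024-12-31")` returns only the 2024-12-31 filing, because the exact date wins.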

Parameters (JSON Schema)
• year (optional): Filter to fiscal periods ending in this calendar year (e.g. 2024 → any period_end starting '2024-'). Useful when the company uses a non-calendar fiscal year.
• fresh (optional)
• limit (optional): Cap on returned items. Omit to return ALL matching items (post-filter). GB and FI native paths paginate through every accounts filing (GB up to 2000); the fallback projection over list_filings is bounded by the adapter's own list_filings page size.
• company_id (required): Registry-specific company ID.
• period_end (optional): Filter to an exact fiscal period end date (YYYY-MM-DD, e.g. '2024-12-31'). Takes precedence over `year`.
• jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)
• items (optional)
• queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
• next_cursor (optional)
• total_count (optional)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations. Annotations indicate read-only, open-world, idempotent, and non-destructive, but the description elaborates on implementation details: it normalizes fiscal-period shape, pre-computes download URLs, returns newest-first, walks the entire accounts history due to late-filed amendments, and mentions error conditions (501 if adapter lacks support). It also describes the return structure (items with fields like `period_end`, `document_id`, etc.) and per-country caveats, providing rich behavioral transparency that annotations alone don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and key enhancements. It efficiently covers usage, behavior, parameters, and caveats in a structured manner without unnecessary repetition. However, it could be slightly more concise by integrating some details (e.g., the per-country caveats paragraph is lengthy but necessary), so it's not perfectly minimal but still highly effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no output schema), the description is complete enough. It covers the purpose, usage guidelines, behavioral traits, parameter semantics, error conditions, and per-country considerations. While there's no output schema, the description details the return structure (items with specific fields), and annotations provide safety context. This addresses all necessary aspects for an agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With schema description coverage at 83%, the baseline is 3, but the description adds meaningful context beyond the schema. It explains filter precedence (`period_end` takes precedence over `year`), clarifies that `limit` caps post-filter results and can be omitted to return all matches, and notes that the whole accounts history is walked per query. However, it doesn't detail all parameters (e.g., `fresh` is not mentioned), so it doesn't fully compensate for the 17% gap, warranting a 4 rather than a 5.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'returns annual-accounts filings (financial statements) for a company' and distinguishes it from sibling tools by explaining it's a 'convenience wrapper over `list_filings(category='accounts')`' with specific enhancements like normalization and pre-computed URLs. This provides a specific verb+resource+scope and differentiates from alternatives like `list_filings` or `fetch_document`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool vs alternatives: it's a convenience wrapper over `list_filings(category='accounts')` that adds normalization and pre-computed URLs to avoid a second round-trip. It also mentions when not to use it (if the adapter doesn't implement `list_filings`, it returns 501) and provides guidance on per-country caveats by referring to `list_jurisdictions`. This covers explicit when/when-not/alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_officer_appointments: List every company an officer has been appointed to (grade A)
Read-only, Idempotent

Given an officer_id (from get_officers or search_officers), return every company in the registry where that person has held an appointment, with role, appointed_on, and resigned_on dates. This is the cross-company tracing tool — use it to follow a person's full corporate footprint across the registry. Results are paginated.
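Because results are paginated, a client typically drains next_cursor in a loop. A minimal sketch, assuming a generic call_tool function supplied by the MCP client (not part of this server):

```python
def all_appointments(call_tool, officer_id: str, jurisdiction: str) -> list[dict]:
    """Follow next_cursor until the registry reports no further pages."""
    items: list[dict] = []
    cursor = None
    while True:
        args = {"officer_id": officer_id, "jurisdiction": jurisdiction}
        if cursor:
            args["cursor"] = cursor  # only send a cursor once the server issues one
        page = call_tool("get_officer_appointments", args)
        items.extend(page.get("items", []))
        cursor = page.get("next_cursor")
        if not cursor:
            return items
```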

Parameters (JSON Schema)
• fresh (optional)
• limit (optional)
• cursor (optional)
• officer_id (required): Officer ID from a previous get_officers or search_officers call.
• jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)
• data (optional)
• items (optional)
• queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
• next_cursor (optional)
• total_count (optional)
• appointments (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it discloses pagination ('Results are paginated'), which is crucial for usage, and implies data freshness considerations through the 'fresh' parameter in the schema. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: first states the core functionality, second provides usage context and differentiation, third discloses pagination. Every sentence adds value without redundancy, and it's front-loaded with the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, paginated results) and rich annotations covering safety, the description is largely complete: it explains purpose, usage, and key behavior (pagination). However, with no output schema, it doesn't describe return values or error handling, which could be helpful for an agent. The parameter guidance is adequate but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 40%, with officer_id and jurisdiction having descriptions but fresh, limit, and cursor lacking them. The description compensates by explaining the officer_id parameter's source ('from get_officers or search_officers') and the tool's overall purpose, which helps infer parameter roles. However, it doesn't detail all parameters like cursor or fresh, keeping it from a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('return every company'), resource ('in the registry'), and scope ('where that person has held an appointment') with distinguishing details like 'cross-company tracing tool' and 'follow a person's full corporate footprint' that differentiate it from sibling tools like get_officers or search_officers. It explicitly mentions the returned fields (role, appointed_on, resigned_on dates).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Given an officer_id from get_officers or search_officers') and its purpose ('cross-company tracing tool — use it to follow a person's full corporate footprint across the registry'), clearly distinguishing it from sibling tools that focus on single-company data or different search methods. It effectively tells the agent when this specific tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_officers: List the officers (directors, secretaries) of a company (grade A)
Read-only, Idempotent

Return the officers of a company — current directors, secretaries, members, partners, board members, procurists / authorised signatories, liquidators, and (by default, where upstream exposes them) historical resignations.

Each officer has a unified shape (jurisdiction, officer_id, name, role, appointed_on, resigned_on, is_active) plus a jurisdiction_data object carrying the raw upstream fields verbatim. Role labels are passed through in the registry's native language (e.g. Styremedlem, Předseda představenstva, Président, PREZES ZARZĄDU) — translate client-side as needed. Birth-date precision varies by jurisdiction (some registries publish YYYY-MM-DD, some only month + year, some nothing).

officer_id, when present, can be passed to get_officer_appointments to retrieve every other company this person has been appointed to — cross-company tracing is one of the most powerful uses of this tool. Not every jurisdiction issues stable person IDs; corporate officers are usually keyed by the corporate's own company_id, natural persons may be keyed by a synthetic index. Some registries mask officer names under GDPR / privacy rules — that masking is upstream, not server-side.

Flags: include_resigned (default true) toggles historical entries on jurisdictions that expose both; group_by_person deduplicates the same person across consecutive appointments on jurisdictions that support it; fresh: true bypasses the cache. Flags are ignored on registries that don't support them. Jurisdictions that don't publish officer data (or that gate it behind paid extracts) return 501.

Per-country caveats (role-label vocabulary, birth-date precision, resignation coverage, GDPR masking, 501 gating, delta-vs-snapshot semantics) are available on demand — call list_jurisdictions({jurisdiction:"<code>"}) for the full schema, or list_jurisdictions({supports_tool:"get_officers"}) for the country-support matrix. All registries are official government sources.
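The group_by_person dedup described above amounts to a fold over the appointment list, keyed the way the parameter doc describes (name plus birth date for natural persons, company number for corporate officers). A sketch with hypothetical field names:

```python
def group_by_person(appointments: list[dict]) -> list[dict]:
    """Collapse one-entry-per-appointment rows into one entry per person."""
    grouped: dict[tuple, dict] = {}
    for a in appointments:
        # identity key per the CZ docs: name + birth date, or corporate ID
        key = (a["name"], a.get("birth_date") or a.get("corporate_id"))
        entry = grouped.setdefault(key, {"name": a["name"], "roles": []})
        entry["roles"].append({"role": a["role"],
                               "appointed_on": a.get("appointed_on")})
    return list(grouped.values())  # insertion order preserved (Python 3.7+)
```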

Parameters (JSON Schema)
• fresh (optional)
• company_id (required)
• jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
• group_by_person (optional): CZ only. When true, dedupe the same person across multiple appointments (e.g. board member → chair → vice-chair) into a single entry. Identity key is (name + datumNarozeni for natural persons, or pravnickaOsoba.ico for corporate). Each grouped entry's jurisdiction_data._appointments[] lists all roles with their dates. Default false (returns one entry per appointment, matching GB behaviour).
• include_resigned (optional): Include officers who have resigned. Default true. Set to false to get only currently serving officers.

Output Schema (JSON Schema)
• data (optional): Adapter returns a bare array; textResult() wraps under `data`.
• items (optional)
• officers (optional)
• queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
• next_cursor (optional)
• total_count (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it explains GDPR masking, jurisdiction-specific limitations (e.g., birth-date precision, 501 gating), cache bypass with 'fresh: true', and how flags are ignored on unsupported registries. While annotations cover read-only, open-world, and idempotent hints, the description enriches this with practical constraints and data source details without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with core functionality, followed by details on data shape, flags, and caveats. While comprehensive, it remains focused with minimal redundancy, though some sentences could be slightly tightened (e.g., the per-country caveats paragraph is dense but informative).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no output schema, rich annotations), the description is highly complete: it covers purpose, usage, data format, parameter semantics, jurisdiction-specific behaviors, and links to other tools for further details. It addresses gaps from missing output schema by describing the unified shape of returned officers and potential errors like 501.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With schema description coverage at 60%, the description compensates by explaining parameter implications in detail: it clarifies the default and effect of 'include_resigned', specifies that 'group_by_person' is for CZ only, and notes that 'fresh' bypasses cache. It also adds context for 'jurisdiction' and 'company_id' by linking to other tools for support details, enhancing understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return the officers of a company' with specific details about what constitutes an officer (directors, secretaries, members, etc.) and distinguishes it from sibling tools like 'get_officer_appointments' by explaining the relationship between them. It goes beyond a simple list to explain the unified shape of returned data and cross-company tracing capabilities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: it mentions using 'get_officer_appointments' for cross-company tracing with officer_id, and directs users to 'list_jurisdictions' for per-country caveats and support details. It also explains when certain flags are applicable (e.g., 'group_by_person' for CZ only) and when jurisdictions return 501 errors.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_persons_with_significant_control: List the persons with significant control (beneficial owners) of a company (grade A)
Read-only, Idempotent

Return the persons with significant control (PSCs / beneficial owners) of a company — persons on a statutory-threshold register (typically >25% ownership or voting rights).

When to call this tool. Only when the user explicitly asks about 'beneficial owners', 'UBO', 'PSC', 'who controls', or the >25% threshold register. For plain 'shareholders' / 'members' / '股东' / '持股人' questions, call get_shareholders instead — it is a DIFFERENT register (the full equity roster with no threshold). A 10% shareholder shows up on the members register but not here; a corporate trustee can show up here without being on the members register.

Each entry has name, kind (individual / corporate-entity / etc.), nature_of_control (e.g. ownership-of-shares-75-to-100-percent, voting-rights-25-to-50-percent), notified_on, and ceased_on if applicable. Raw upstream fields come through verbatim under jurisdiction_data. Returns an empty list (not an error) for companies whose registry supports PSCs but has no filing on record.

Many countries keep beneficial-ownership data in a separate register from the main company registry, or restrict it to authenticated / AML-obliged callers. Unsupported jurisdictions return 501, sometimes with alternative_tool='get_shareholders' when the caller probably wanted registered shareholders instead. Per-country availability, historical-entry behaviour, and paid-tier gates — call list_jurisdictions({jurisdiction:"<code>"}).
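The routing rule stated in the 'When to call this tool' paragraph reduces to a keyword check; the trigger list and function below are illustrative only:

```python
# explicit PSC/UBO wording from the tool description; anything else
# routes to the full members register
PSC_TRIGGERS = ("beneficial owner", "ubo", "psc", "who controls", "25%")

def pick_ownership_tool(question: str) -> str:
    """PSC only on explicit control/UBO wording; otherwise the members register."""
    q = question.lower()
    if any(t in q for t in PSC_TRIGGERS):
        return "get_persons_with_significant_control"
    return "get_shareholders"
```

A real agent would also match the translated equivalents listed in both tool descriptions (e.g. '股东' routes to get_shareholders).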

Parameters (JSON Schema)
• fresh (optional)
• company_id (required)
• jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.
• include_ceased (optional): CZ only. Include historical PSCs (those with a ceased_on date). Default false. GB returns historical PSCs by default; CZ does not — set this to true to match GB behaviour.

Output Schema (JSON Schema)
• psc (optional)
• data (optional)
• items (optional)
• queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
• next_cursor (optional)
• total_count (optional)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only, open-world, idempotent, and non-destructive hints, but the description adds significant behavioral context beyond this. It explains that returns may be an empty list for companies with no filing, unsupported jurisdictions return 501 with alternative tool suggestions, and details on per-country availability, historical-entry behavior, and paid-tier gates. This enriches the agent's understanding of edge cases and limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage guidelines, output details, and behavioral notes. Each sentence adds value without redundancy, such as clarifying sibling tool differences and jurisdiction-specific behaviors, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (involving jurisdiction-specific rules, sibling tool distinctions, and no output schema), the description is highly complete. It covers purpose, usage, output fields, edge cases (empty lists, error codes), and directs to other tools for further context, ensuring the agent has sufficient information to use the tool effectively despite the lack of structured output details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, with only the 'jurisdiction' and 'include_ceased' parameters described in the schema. The description compensates by explaining the output structure (e.g., fields like 'name', 'kind', 'nature_of_control') and contextualizing parameters indirectly through examples of jurisdiction codes and usage notes. However, it doesn't explicitly detail all input parameters beyond what the schema provides, keeping it from a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool returns persons with significant control (PSCs/beneficial owners) of a company, specifying these are individuals on a statutory-threshold register typically with >25% ownership or voting rights. It clearly distinguishes this from the sibling tool `get_shareholders` by explaining the different registers and thresholds, making the purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to call this tool (e.g., for 'beneficial owners', 'UBO', 'PSC', 'who controls', or >25% threshold queries) and when not to (e.g., for plain 'shareholders' questions, directing to `get_shareholders` instead). It also mentions alternative tools like `list_jurisdictions` for checking availability, offering clear usage context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_shareholders: List the shareholders / members of a company (grade A)
Read-only, Idempotent

Return the shareholders / members / quota-holders of a company — the legal-statutory equity roster published by the company registry, with no ownership-threshold filter.

When to call this tool. Use this whenever the user asks about 'shareholders', 'members', 'quota-holders', or equivalents in other languages ('股东', '股東', 'actionnaires', 'socios', 'Gesellschafter', 'aksjonærer', 'aandeelhouders' etc.). This is a DIFFERENT concept from get_persons_with_significant_control (PSC / beneficial owners / UBO), which returns only persons above a statutory control threshold (typically >25%) on a separate beneficial-ownership register. Do NOT substitute PSC for a plain shareholder question — the two registers can disagree (a 10% shareholder is on the members register but not the PSC register; a corporate trustee can be a PSC without appearing on the members register). Call PSC only when the user explicitly asks about 'beneficial owners', 'who controls', 'PSC', 'UBO', or the threshold register.

Public disclosure is strongly legal-form-conditional. Private-limited / LLC forms typically disclose quota-holders in the public register; joint-stock / public-limited forms typically keep shareholders in a private book, so this tool may return an empty list, a pointer to the relevant filing, or a statutory explanation. Response shape varies by jurisdiction: some return a structured array, some return the filing(s) that carry the roster (you then call fetch_document on the returned document_id to read the actual list), some return threshold-crossing events for listed issuers. Every response includes a disclosure flag and/or explanatory note.

Always returns a jurisdiction_data object with the raw upstream fields verbatim. fresh: true bypasses the cache. Jurisdictions without this capability return 501.

Per-country caveats (which legal forms disclose, response shape, how to reconstruct a current roster from delta filings) are available on demand — call list_jurisdictions({jurisdiction:"<code>"}) for the full schema, or list_jurisdictions({supports_tool:"get_shareholders"}) for the country-support matrix. All registries are official government sources.
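Because the response shape varies (a structured roster vs. a pointer to the filing that carries it), callers need a small dispatch step. A sketch, with hypothetical field names and a stand-in fetch_document callable:

```python
def read_roster(response: dict, fetch_document):
    """Return roster items directly, or follow a filing pointer via fetch_document."""
    items = response.get("items") or []
    if items and "document_id" in items[0] and "name" not in items[0]:
        # the registry returned the filing carrying the roster,
        # not the roster itself; fetch the document to read the list
        return fetch_document(items[0]["document_id"])
    return items
```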

Parameters (JSON Schema)
• fresh (optional)
• company_id (required)
• jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema (JSON Schema)
• data (optional)
• as_of (optional)
• items (optional)
• company_id (optional)
• queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
• total_count (optional)
• jurisdiction (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, open-world, idempotent, and non-destructive hints. The description adds valuable context beyond annotations: it explains that responses may vary by jurisdiction (e.g., empty lists for joint-stock companies, structured arrays or document pointers), includes disclosure flags, mentions caching behavior with `fresh: true`, and notes that unsupported jurisdictions return 501. It also references per-country caveats and how to access them via other tools, though it doesn't detail rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., 'When to call this tool,' public disclosure details, per-country caveats) and uses bold for emphasis. It is appropriately sized for the tool's complexity, though some sentences are lengthy. Every sentence adds value, such as explaining jurisdictional variations and tool interactions, with minimal redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (legal-registry data with jurisdictional variations), no output schema, and rich annotations, the description is highly complete. It covers purpose, usage guidelines, behavioral nuances (e.g., response shapes, caching, error codes), parameter context, and references to other tools for further details. It adequately compensates for the lack of output schema by describing possible response types and flags.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (33%), with only the 'jurisdiction' parameter described in the schema. The description compensates by explaining the purpose of 'fresh' (bypasses cache) and implying 'company_id' identifies the target company. It also adds context about jurisdiction codes (ISO 3166-1 alpha-2) and references `list_jurisdictions` for details, though it doesn't fully document all parameter formats or constraints beyond what's implied.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 'the shareholders / members / quota-holders of a company' from the 'legal-statutory equity roster published by the company registry,' with explicit scope ('no ownership-threshold filter'). It distinguishes from sibling `get_persons_with_significant_control` by explaining the different registers and concepts, making the purpose specific and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (e.g., for 'shareholders', 'members', 'quota-holders' in various languages) and when not to use it (e.g., not for PSC/beneficial owners unless explicitly asked). It names the alternative tool (`get_persons_with_significant_control`) and explains the conceptual differences, including examples of disagreement between registers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_specialised_record: Fetch a CZ-specific source-register record (Grade: A)
Read-only · Idempotent

Retrieve a record from one of ARES's specialised source registers — covers sector-specific data the basic profile doesn't include.

── CZ (Czechia ARES) ── Available source codes:
• ros — Public Registers (Registr osob) summary state
• res — Statistical Register of Economic Entities (Registr ekonomických subjektů)
• rzp — Trade Licence Register (Registr živnostenského podnikání) — trade licences a sole trader / company holds
• nrpzs — National Register of Healthcare Providers (Národní registr poskytovatelů zdravotních služeb) — for hospitals, clinics, pharmacies
• rpsh — Register of Political Parties and Movements (Registr politických stran a hnutí)
• rcns — Register of Churches and Religious Societies (Registr církví a náboženských společností)
• szr — Farmers' Register (Registr zemědělských podnikatelů)
• rs — Register of Schools (Registr škol)
• ceu — Central Insolvency Record (Centrální evidence úpadců)

Each source returns its own response shape — refer to ARES API docs at https://ares.gov.cz/swagger-ui/ for field details. The full upstream record is returned verbatim under record. Use this when the basic get_company_profile or get_officers/get_psc/get_charges don't have the field you need (e.g. trade-licence specialisations for sole traders → rzp, school accreditation details → rs, healthcare facility list → nrpzs). Returns 404 if the IČO doesn't exist in that specific source register. Pricing: free.

Other jurisdictions return 501.
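The routing above can be sketched as a small client-side lookup. This is a hypothetical helper, not part of the server: the source codes and the 8-digit IČO rule come from the tool description, while the helper name and the `need` keys are invented for illustration.

```python
# Illustrative sketch: pick the CZ source-register code for a
# get_specialised_record call based on the kind of data needed.
CZ_SOURCE_FOR_NEED = {
    "trade_licences": "rzp",           # Trade Licence Register
    "school_accreditation": "rs",      # Register of Schools
    "healthcare_facilities": "nrpzs",  # National Register of Healthcare Providers
    "insolvency": "ceu",               # Central Insolvency Record
    "statistics": "res",               # Statistical Register of Economic Entities
}

def specialised_record_args(ico: str, need: str) -> dict:
    """Build the argument object for a CZ get_specialised_record call."""
    if not (ico.isdigit() and len(ico) == 8):
        raise ValueError("CZ company_id must be an 8-digit IČO")
    return {"jurisdiction": "CZ", "source": CZ_SOURCE_FOR_NEED[need], "company_id": ico}
```

A 404 from the resulting call means the IČO is valid but absent from that particular source register, per the description above.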

Parameters (JSON Schema)
Name | Required | Description
fresh | No
source | Yes | Source register code (lowercase). CZ: ros/res/rzp/nrpzs/rpsh/rcns/szr/rs/ceu. CH: sogc (SOGC publication by ID), sogc_bydate (YYYY-MM-DD), registry_by_commune (BFS community ID).
company_id | Yes | For CZ: 8-digit IČO. For CH: SOGC publication ID (sogc), date YYYY-MM-DD (sogc_bydate), or BFS community ID (registry_by_commune).
jurisdiction | Yes | 'CZ' or 'CH'.

Output Schema

Name | Required | Description
record | No
queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried.
record_kind | No
jurisdiction | No
jurisdiction_data | No | Full original response fields from the upstream registry, field names unchanged. Shape is jurisdiction-specific — see `list_jurisdictions({ jurisdiction: '<CODE>' })`.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide: it explains the 404 error condition ('Returns 404 if the IČO doesn't exist in that specific source register'), mentions pricing ('Pricing: free'), and describes the response structure ('full upstream record is returned verbatim under `record`'). While annotations cover read-only/idempotent aspects, the description provides important operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, source code list, usage guidance, error conditions), but contains some redundancy (repeating 'CZ' in the source code list header when already mentioned in title). Most sentences earn their place by providing essential information, though it could be slightly more concise in the source code explanations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple jurisdictions, many source registers, different parameter formats) and lack of output schema, the description provides comprehensive context: it explains what each source register contains, when to use the tool, error conditions, pricing, response structure, and references external documentation. This adequately compensates for the missing output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 75% schema description coverage, the description adds significant value by providing the complete list of CZ source codes with explanations of what each register contains (e.g., 'rzp — Trade Licence Register... trade licences a sole trader / company holds'), which helps users understand the semantic meaning of the 'source' parameter beyond what the schema provides. The description also clarifies jurisdictional differences in parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('retrieve') and resource ('record from specialised source registers'), and explicitly distinguishes it from sibling tools by naming specific alternatives (get_company_profile, get_officers/get_psc/get_charges) and explaining when to use this tool instead ('when the basic... don't have the field you need').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives ('Use this when the basic get_company_profile or get_officers/get_psc/get_charges don't have the field you need'), includes specific examples of use cases (e.g., 'trade-licence specialisations for sole traders → rzp'), and mentions jurisdictional limitations ('Other jurisdictions return 501').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_actos_inscritos: Locate BORME Section I acto-inscripción entries for a Spanish company (Grade: A)
Read-only · Idempotent

Scans BORME Section I (Empresarios — Actos inscritos) and Section B (Otros actos) province PDFs across a date range and returns every acto paragraph whose header line exactly matches the given denominación. Use this to recover current/historical directors, resignations, constitución details, sole-shareholder declarations, capital changes, dissolution, and extinción — all statutory registered acts, which are NOT exposed by the per-company /buscar/anborme.php search (that search indexes only Section II, Anuncios y avisos legales).

Each hit includes the verbatim acto_numero (BOE's in-year sequential), borme_a_id (BORME-A-YYYY-NNN-PP province bulletin), provincia title, source_pdf_url, pagina_inicial/final, denominacion_upstream (as printed, including accent/punctuation drift), and texto_raw — the complete paragraph text from the {Nº} - {DENOMINACIÓN}. header to the next acto boundary. The adapter does NOT parse specific fields (Nombramientos / Ceses / Socio único / Capital / Datos registrales) — read texto_raw and extract inline.

Performance: each day in the range costs ~N province PDF fetches (N ≤ 52 provinces but typically 20–40). Default window: last 30 days. Hard cap: 90 days per call. Chunk longer windows client-side. Supported on ES only. Cached on the VM (PDFs are immutable once published).
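The client-side chunking the description asks for can be sketched as follows — a hypothetical helper (the function name is invented); the 90-day hard cap and the ISO YYYY-MM-DD window format come from the tool documentation above.

```python
from datetime import date, timedelta

def chunk_windows(date_from: date, date_to: date, max_days: int = 90):
    """Split an inclusive [date_from, date_to] range into consecutive
    windows of at most max_days days, one list_actos_inscritos call each."""
    windows = []
    start = date_from
    while start <= date_to:
        end = min(start + timedelta(days=max_days - 1), date_to)
        windows.append((start.isoformat(), end.isoformat()))
        start = end + timedelta(days=1)  # next window starts the following day
    return windows
```

Each returned pair maps onto one call's `date_from`/`date_to` arguments.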

Parameters (JSON Schema)
Name | Required | Description
fresh | No | Bypass the PDF extraction and sumario caches.
limit | No | Max acto entries to return (default 50, max 200).
cursor | No | Pagination cursor from a previous response.
date_to | No | End of the scan window, ISO YYYY-MM-DD. Defaults to today.
act_types | No | Pre-filter acto paragraphs by canonical act-type key (or raw Spanish verbatim substring). Supported keys: 'nombramiento', 'cese', 'dimision', 'revocacion', 'adm-unico', 'liquidador', 'apoderado', 'poderes', 'consejero', 'auditor', 'socio-unico', 'unipersonalidad', 'ampliacion-capital', 'reduccion-capital', 'capital', 'constitucion', 'disolucion', 'extincion', 'transformacion', 'modificacion-estatutos', 'cambio-denominacion', 'cambio-domicilio', 'cambio-objeto-social', 'fusion', 'escision', 'cesion-global', 'concurso', 'prorroga'. Unrecognised keys are treated as case/accent-insensitive Spanish substring match against texto_raw (power-user escape hatch).
date_from | No | Start of the scan window, ISO YYYY-MM-DD. Defaults to 30 days before today.
company_id | Yes | Exact denominación social as emitted by the BORME header line — legal-form suffix required (e.g. 'ALICANTE PARK SL.', 'TELEFÓNICA, S.A.', 'REPSOL, S.A.'). Matching is accent-insensitive and treats '.', ',', and whitespace as interchangeable separators, but it is NOT substring matching — 'FOO SL' will not match 'FOO HOLDING SL'. Call search_companies first to obtain the canonical form surfaced in recent Section II publications.
jurisdiction | Yes | 'ES' only.
province_filter | No | Restrict the scan to these BORME-A province codes (2-digit PP suffix of 'BORME-A-YYYY-NNN-PP'; e.g. '03'=Alicante, '28'=Madrid, '08'=Barcelona). Empty → adapter consults its per-company province cache, falling back to all provinces on miss. Use this when you already know the company's Registro Mercantil.
bypass_province_cache | No | Disable the per-company province cache for this call. Useful when a company has moved domicilio and previous cache would miss new actos.
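The company_id matching rule (accent-insensitive, '.', ',' and whitespace interchangeable, exact rather than substring) can be approximated client-side to pre-check a candidate name. This is an illustrative reconstruction under those stated rules, not the adapter's actual code:

```python
import unicodedata

def normalise_denominacion(name: str) -> tuple:
    """Strip accents, uppercase, and treat '.', ',' and whitespace as
    interchangeable separators, per the documented matching rule."""
    stripped = "".join(
        c for c in unicodedata.normalize("NFD", name)
        if unicodedata.category(c) != "Mn"  # drop combining accent marks
    )
    for sep in ".,":
        stripped = stripped.replace(sep, " ")
    return tuple(stripped.upper().split())

def denominacion_matches(query: str, header: str) -> bool:
    # Exact token sequence, not substring: 'FOO SL' must not match 'FOO HOLDING SL'.
    return normalise_denominacion(query) == normalise_denominacion(header)
```

If this check fails against names seen in search_companies output, the canonical BORME form likely differs and should be fetched first, as the parameter description advises.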

Output Schema

Name | Required | Description
data | No
actos | No
count | No
company_id | No
queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried.
jurisdiction | No
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: performance details (cost per day, default/hard caps, caching), geographic restriction (ES only), and output format (what each hit includes). It does not contradict annotations, but could mention rate limits or error handling more explicitly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the first sentence stating the core purpose. It uses paragraphs to organize information (purpose, output details, performance, restrictions), but some sentences are lengthy and could be tightened for better readability without losing essential details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, no output schema) and rich annotations, the description is mostly complete. It covers purpose, usage, behavioral traits, and output structure. However, it lacks explicit details on error cases or response format beyond hit fields, which could be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 10 parameters thoroughly. The description adds some semantic context, such as explaining that the adapter does not parse specific fields from texto_raw, but it does not provide significant additional meaning beyond what the schema descriptions already cover for parameters like company_id or act_types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool scans BORME Section I and B PDFs across date ranges to return acto paragraphs matching a company name, distinguishing it from the per-company search (which only indexes Section II). It specifies the resource (BORME PDFs) and verb (scans/returns), with explicit differentiation from sibling tools like search_companies.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool (to recover statutory registered acts not exposed by the per-company search) and when not to use it (for Section II announcements). It mentions calling search_companies first to obtain the canonical company name form, and it notes performance considerations like date range caps and chunking longer windows client-side.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_change_batches: List change-notification batches or fetch one (CZ only) (Grade: A)
Read-only · Idempotent

Access the change-batch feed — incremental delta batches listing every company ID that changed in a given source register during a given reporting period. Currently only available for CZ; other jurisdictions return 501.

Two modes:

  1. List mode (omit batch_id): returns the N latest batches, optionally filtered by source. Page forward via offset/limit until the returned array is empty — the upstream has no total-count field.

  2. Detail mode (supply batch_id + source): returns change records in that batch with typZmeny ∈ {INS, UPD, DEL} and the changed company ID. Response also carries total_changes (full batch size) and pagination: { limit, offset, has_more }. Client-side sliced — batches can exceed 1000 records.

Raw upstream fields come through verbatim under jurisdiction_data. Default page size 100, max 1000. Per-country source codes, capabilities and caveats — call list_jurisdictions({jurisdiction:"<code>"}).
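The list-mode paging rule ("page forward until the returned array is empty") can be sketched as a loop. This is a hypothetical client-side helper: `call_tool` stands in for whatever MCP client invocation is in use, and the `batches` response key is assumed from the output schema below.

```python
# Illustrative sketch: drain all change batches in list mode by paging
# forward until an empty array comes back (upstream has no total count).
def drain_batches(call_tool, source=None, page_size=100):
    batches, offset = [], 0
    while True:
        args = {"jurisdiction": "CZ", "limit": page_size, "offset": offset}
        if source:
            args["source"] = source
        page = call_tool("list_change_batches", args)["batches"]
        if not page:
            break
        batches.extend(page)
        offset += page_size
    return batches
```

Detail mode (batch_id + source) would use the returned `pagination.has_more` flag instead, per the description above.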

Parameters (JSON Schema)
Name | Required | Description
fresh | No | Bypass cache.
limit | No | Page size. List mode: 1-100 (default 20). Detail mode: 1-1000 (default 100) — client-side slice of seznamNotifikaci.
offset | No | Pagination skip. Applies to the batches array in list mode, or seznamNotifikaci in detail mode.
source | No | Source register code. Optional in list mode (if omitted, batches from all sources). REQUIRED in detail mode (when batch_id is provided).
batch_id | No | Batch number (cisloDavky). If provided, switches to detail mode and returns the full list of change records. Requires `source`.
jurisdiction | Yes | 'CZ' only.

Output Schema

Name | Required | Description
data | No
count | No
batches | No
queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried.
jurisdiction | No
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations. Annotations indicate read-only, open-world, idempotent, and non-destructive operations, but the description elaborates on pagination behavior ('Page forward via `offset`/`limit` until the returned array is empty'), jurisdiction limitations ('CZ only'), and response structure details (e.g., 'total_changes', 'pagination', 'Raw upstream fields'). It does not contradict annotations, as it describes a data retrieval tool consistent with read-only hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with the core purpose, followed by mode explanations, operational details, and caveats. Every sentence adds value—e.g., clarifying pagination, jurisdiction limits, and related tool calls—without redundancy. It efficiently covers complex functionality in a compact format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (two modes, jurisdiction constraints, pagination) and lack of an output schema, the description does an excellent job of explaining behavior, limitations, and usage. It covers key aspects like response fields, pagination logic, and error cases (e.g., 501 for non-CZ). A minor gap is the absence of explicit error handling details beyond the 501 note, but overall it's highly complete for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds some semantic context by explaining how parameters like `batch_id` and `source` switch between modes and affect behavior, but it doesn't provide significant additional meaning beyond what's in the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Access', 'returns') and resources ('change-batch feed', 'incremental delta batches', 'company ID that changed'). It distinguishes this tool from siblings by focusing on change-notification batches rather than company profiles, documents, or searches, making its scope explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives. It specifies that it's 'Currently only available for CZ; other jurisdictions return 501' and directs users to 'call `list_jurisdictions({jurisdiction:"<code>"})`' for per-country details. It also distinguishes between two modes (list vs. detail) based on parameter usage, offering clear operational context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_establishments: List an enterprise's establishment units (BE only) (Grade: A)
Read-only · Idempotent

Return every establishment unit (vestigingseenheid / unité d'établissement) attached to a Belgian enterprise number, as exposed by the official KBO Public Search vestiginglijst.html page. Each unit is a physical location (office / shop / warehouse) operated by the enterprise and has its own 10-digit establishment number starting with the digit 2 (e.g. 2.143.775.125). The unit itself is NOT a legal entity — the enterprise is — but the KBO exposes per-unit name, address, start date, activity codes, contact details, and (where applicable) authorisations and entrepreneurial-skill registrations.

Returns an array of { establishment_id, establishment_id_digits, status, start_date, name, address }. Pricing: free. Other jurisdictions return 501.

Parameters (JSON Schema)
Name | Required | Description
fresh | No | Bypass cache.
company_id | Yes | Belgian Enterprise Number — 10 digits, accepted as '0417.497.106' / '0417497106' / 'BE 0417 497 106'.
jurisdiction | Yes | 'BE' only.
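The three accepted company_id forms reduce to the same 10 digits, which a client can normalise up front. A minimal sketch, assuming only the formats listed in the parameter description (the function name is invented):

```python
import re

def normalise_be_enterprise_number(raw: str) -> str:
    """Reduce '0417.497.106', '0417497106' or 'BE 0417 497 106'
    to the bare 10-digit Belgian enterprise number."""
    digits = re.sub(r"\D", "", raw)  # drops the 'BE' prefix, dots and spaces
    if len(digits) != 10:
        raise ValueError(f"expected a 10-digit Belgian enterprise number, got {raw!r}")
    return digits
```

Establishment numbers themselves start with the digit 2 and identify units, not the enterprise, per the description above.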

Output Schema

Name | Required | Description
data | No
count | No
company_id | No
queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried.
jurisdiction | No
establishments | No
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it specifies the data source (KBO Public Search vestiginglijst.html), pricing (free), error handling for other jurisdictions (returns 501), and details about the return structure (array with specific fields). This enriches the agent's understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, then explains what an establishment unit is, details the return structure, and ends with pricing and error handling. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations, and lack of output schema, the description is complete: it explains the purpose, data source, return format, pricing, and error conditions. This provides sufficient context for an agent to use the tool effectively, compensating for the missing output schema with clear return details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters. The description does not add meaning beyond the schema for parameters like 'fresh', 'company_id', or 'jurisdiction'. It implies jurisdiction must be 'BE' but doesn't clarify parameter interactions. Baseline 3 is appropriate since the schema carries the parameter documentation burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Return[s] every establishment unit attached to a Belgian enterprise number' and distinguishes it from siblings by specifying it's for Belgian establishments only (BE only in title) and listing physical location details. It explains what an establishment unit is versus the enterprise legal entity, making the purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for Belgian enterprises to get establishment units from the KBO Public Search. It implicitly excludes other jurisdictions by stating 'Other jurisdictions return 501.' However, it does not explicitly name alternatives among sibling tools or specify when not to use it beyond jurisdiction limits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_filings: List the filings submitted by a company to its registry (Grade: A)
Read-only · Idempotent

Return a company's filing history. Each filing has a filing_id, filing_date, category, description, and (when upstream exposes one) a document_id that round-trips to get_document_metadata / fetch_document. Raw upstream fields come through verbatim under jurisdiction_data. Results are newest-first.

Use the optional category parameter to filter. Common normalized categories: 'accounts', 'annual-return', 'capital', 'charges', 'confirmation-statement', 'incorporation', 'insolvency', 'liquidation', 'mortgage', 'officers', 'persons-with-significant-control', 'resolution'. Some jurisdictions also accept native form codes directly — pass the upstream code through unchanged if you have one.

Pagination: limit (default 25, max 1000). Some adapters use cursor pagination — pass back next_cursor as cursor to continue. Others use numeric offset. has_document flags whether the body can actually be retrieved via fetch_document; some registries expose only the metadata listing with the body paywalled or unavailable.

Not every registry publishes a filing list; unsupported jurisdictions return 501. Per-country caveats (ID format, accepted category values, cursor vs offset, document availability and pricing, paid-tier gates) — call list_jurisdictions({jurisdiction:"<code>"}).
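The cursor-style pagination described above (pass back `next_cursor` as `cursor` to continue) can be sketched as a loop. This is a hypothetical client-side helper: `call_tool` stands in for the MCP client invocation, and the `items`/`next_cursor` keys are assumed from the output schema below.

```python
# Illustrative sketch: collect a GB company's filing history by
# following next_cursor until the server stops returning one.
def all_filings(call_tool, company_id, category=None, page_size=100):
    filings, cursor = [], None
    while True:
        args = {"jurisdiction": "GB", "company_id": company_id, "limit": page_size}
        if category:
            args["category"] = category
        if cursor:
            args["cursor"] = cursor  # opaque value from the previous page
        resp = call_tool("list_filings", args)
        filings.extend(resp.get("items", []))
        cursor = resp.get("next_cursor")
        if not cursor:
            return filings
```

An offset-paginated adapter (e.g. IE, per the parameter table) would instead increment a numeric `offset` until a short page comes back.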

Parameters (JSON Schema)
Name | Required | Description
fresh | No
limit | No | Items per page. Default 25.
cursor | No | GB only. Opaque pagination cursor returned by a previous call as 'next_cursor'. Omit for the first page.
offset | No | IE only. Skip the first N filings (pagination). Combine with limit.
category | No | Optional filter on standardized category. GB: native Companies House category strings. IE: 'accounts'|'annual-return'|'capital'|'charges'|'incorporation'|'insolvency'|'officers'|'prospectus'|'registered-office'|'resolution'. IM: 'annual-return'|'articles'|'memorandum'|'incorporation'|'name-change'|'officers'|'resolution'|'charges' (mapped to upstream AR/AA/MA/INC/CCN/9N/RES/CRS; pass the raw upstream code directly for any other IoM document type). IS: 'annual-return'|'incorporation'|'articles'|'supplementary-notice'|'other', or the Icelandic column names ('Stofngögn'/'Samþykktir'/'Aukatilkynningar'/'Önnur gögn'), or the raw numeric typeid (1/4/5/6/7).
company_id | Yes | Registry-specific company ID. IE accepts an optional '/B' suffix for the business-name register.
jurisdiction | Yes | ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema

Name | Required | Description
items | No
queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried.
next_cursor | No
total_count | No
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds substantial behavioral context beyond what annotations provide. While annotations declare readOnlyHint=true, idempotentHint=true, etc., the description details pagination behavior (limit default 25, max 1000, cursor vs offset pagination), jurisdiction limitations (unsupported jurisdictions return 501), document availability constraints (has_document flag, paywalled bodies), and sorting order (newest-first). This provides crucial operational context that annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized for a complex tool with 7 parameters. It's front-loaded with core functionality, then addresses filtering, pagination, and jurisdiction caveats. While comprehensive, every sentence earns its place by providing necessary operational context. Minor deduction for some density, but overall efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, jurisdiction variations, pagination differences) and lack of output schema, the description provides excellent completeness. It covers return format, filtering options, pagination mechanisms, jurisdiction limitations, error conditions, and relationships to sibling tools. The guidance to call list_jurisdictions for per-country details appropriately delegates complexity while maintaining completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 86% schema description coverage, the baseline would be 3, but the description adds significant value beyond the schema. It explains the relationship between category filtering and jurisdiction-specific codes, clarifies pagination behavior (cursor vs offset by jurisdiction), and provides context about company_id suffixes and jurisdiction support. The description compensates for the 14% schema coverage gap with practical usage guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Return a company's filing history' with specific details about what each filing contains (filing_id, filing_date, category, description, document_id, jurisdiction_data) and that results are newest-first. It distinguishes from siblings like get_document_metadata and fetch_document by explaining the relationship, and from other list/search tools by focusing specifically on filings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it mentions using the optional 'category' parameter to filter, explains when to use get_document_metadata/fetch_document for document retrieval, and explicitly states that unsupported jurisdictions return 501 with the alternative to call list_jurisdictions for per-country caveats. It also distinguishes from sibling tools by explaining the document_id relationship.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_jurisdictions: Per-country schema reference and tool-support matrix (Grade: A)
Read-only · Idempotent

Per-country reference data dictionary. Two modes — pass EXACTLY ONE of: • jurisdiction: 'GB' — full schema for one country: registry name + URL, data license, company ID format with examples, native status values + mapping to the unified active/inactive/dissolved/unknown enum, list of supported tools, list of field names available in jurisdiction_data sub-objects (profile/filing/officer/shareholder/psc/charge), free-text quirks notes, and the global_search_excluded flag. • supports_tool: 'get_officers' — cross-country matrix for one tool: which jurisdictions implement it (with their registry names) and which don't. Calling with no parameters returns a structured 400 with both shapes documented. For server-level info (codes list, version, rate limits) call about instead.
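The "pass EXACTLY ONE" contract above can be sketched as a small client-side pre-flight check. This is an illustrative helper, not the server's actual code; the function name and return shape are assumptions.

```python
def validate_list_jurisdictions_args(jurisdiction=None, supports_tool=None):
    """Return the mode this call would run in, or raise the documented 400."""
    if (jurisdiction is None) == (supports_tool is None):
        # Zero or both parameters supplied: the server returns a structured
        # 400 with both response shapes documented.
        raise ValueError("400: pass exactly one of jurisdiction / supports_tool")
    if jurisdiction is not None:
        # Codes are case-insensitive; CA subdivisions keep their hyphen.
        return ("single-country", jurisdiction.upper())
    return ("support-matrix", supports_tool)
```

A call with `jurisdiction="gb"` would resolve to single-country mode for "GB", while `supports_tool="get_officers"` would request the cross-country matrix.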

Parameters (JSON Schema)
jurisdiction (optional): ISO 3166-1 alpha-2 country code (case-insensitive; CA subdivisions hyphenated like 'CA-BC'). Returns the full per-country schema. Mutually exclusive with `supports_tool`.
supports_tool (optional): Tool name (e.g. 'get_officers', 'get_persons_with_significant_control'). Returns the matrix of which jurisdictions implement this tool. Mutually exclusive with `jurisdiction`.

Output Schema (JSON Schema)
hint (optional)
tool (optional): Populated in cross-country support-matrix mode: echoes the tool name that was queried.
queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
jurisdiction (optional): Populated in single-country mode: carries the JurisdictionMetadata for the requested country.
supported_in (optional)
supported_count (optional)
not_supported_in (optional)
not_supported_count (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: the two distinct response shapes, the 400 error for no parameters, and the case-insensitive handling of jurisdiction codes. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient and well-structured: first sentence establishes purpose, bullet points clearly explain the two modes, and final sentences cover error cases and sibling differentiation. Every sentence earns its place with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (two distinct modes with different return shapes), the description provides excellent context about what information is returned in each mode. However, without an output schema, the description doesn't fully document the return structure details. The annotations provide good safety coverage, making this mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds some semantic context about what each parameter triggers (full schema vs. cross-country matrix) and provides example values, but doesn't add syntax or format details beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'Per-country reference data dictionary' with two specific modes: full schema for one country or cross-country matrix for one tool. It distinguishes from sibling 'about' by specifying this tool is for jurisdiction-specific reference data while 'about' is for server-level info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: 'pass EXACTLY ONE of' the two parameters, with clear examples of each mode. It explicitly states when NOT to use this tool ('For server-level info... call `about` instead') and provides the consequence of incorrect usage ('Calling with no parameters returns a structured 400').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_addresses: Standardise or search Czech RÚIAN addresses (CZ only) (Grade: A)
Read-only · Idempotent

Resolve a free-text or structured Czech address against the RÚIAN (Registr územní identifikace, adres a nemovitostí) register. Returns one or more normalised addresses with full geographic codes (kodAdresnihoMista, kodObce, kodOkresu, kodKraje, PSC). Powered by ARES's POST /standardizovane-adresy/vyhledat.

── Use cases ── • Normalise a messy address string before matching it against other data sources • Resolve an obec/street/house number to its canonical RÚIAN code • Validate whether an address exists in the Czech cadastre

── Standardisation mode ── • UPLNA_STANDARDIZACE (default): require full match (street + house number + locality) • VYHOVUJICI_ADRESY: accept partial matches (e.g. obec only)

At least one of (text, nazev_obce, kod_obce, kod_adresniho_mista) should be supplied. Response contains a stav_standardizace field reporting whether the match was UPLNA, CASTECNA_OBEC, or NEUSPESNA (no match).

Pricing: free. Other jurisdictions return 501.
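Interpreting the `stav_standardizace` values listed above might look like the sketch below. The response field names follow the description (stav_standardizace, kodAdresnihoMista); the helper itself and the exact response shape are assumptions.

```python
def summarise_match(result):
    """Classify a search_addresses hit by its standardisation state."""
    state = result.get("stav_standardizace")
    if state == "UPLNA":
        # Full strict match: street + house number + locality all resolved.
        return f"exact match: RUIAN {result['kodAdresnihoMista']}"
    if state == "CASTECNA_OBEC":
        # Partial match: only the municipality (obec) could be resolved.
        return "partial match: municipality resolved only"
    # NEUSPESNA or missing: address not found in the register.
    return "no match in RUIAN"
```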

Parameters (JSON Schema)
text (optional): Unstructured address text (e.g. 'Národní 37/38, Nové Město, Praha 1').
fresh (optional): Bypass cache.
limit (optional): Page size (1-100).
offset (optional): Pagination skip.
kod_obce (optional): Municipality code (RÚIAN).
kod_ulice (optional): Street code (RÚIAN).
nazev_obce (optional): Municipality name (e.g. 'Praha', 'Brno').
nazev_ulice (optional): Street / public space name.
jurisdiction (required): 'CZ' only.
cislo_domovni (optional): House number (number before the slash).
kod_casti_obce (optional): District code within municipality.
cislo_orientacni (optional): Orientation number (number after the slash).
nazev_casti_obce (optional): District name within municipality.
typ_standardizace (optional): Required match type. UPLNA = full strict match, VYHOVUJICI = partial matches allowed. Default: UPLNA_STANDARDIZACE.
kod_adresniho_mista (optional): Address-place code (unique identifier in RÚIAN).
kod_mestskeho_obvodu (optional): City district code.
nazev_mestskeho_obvodu (optional): City district name (Prague / statutory cities).
cislo_orientacni_pismeno (optional): Orientation number letter suffix (single char).

Output Schema (JSON Schema)
data (optional)
count (optional)
query (optional)
addresses (optional)
queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
jurisdiction (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the API source ('Powered by ARES's POST /standardizovane-adresy/vyhledat'), discloses pricing ('free'), specifies jurisdiction limitations ('Other jurisdictions return 501'), and describes the response structure ('stav_standardizace' field with match statuses). While annotations cover safety (readOnly, non-destructive), the description provides operational details that help the agent understand the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, use cases, standardisation mode, requirements, pricing) and uses bullet points effectively. While slightly longer than minimal, every sentence adds value: the first sentence states the core purpose, followed by organized supplementary information. It could be slightly more concise but remains efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (18 parameters, no output schema), the description provides excellent contextual completeness. It covers purpose, use cases, behavioral traits (pricing, jurisdiction limits), parameter requirements, and match types. With comprehensive annotations and schema coverage, the description fills remaining gaps effectively, making it complete enough for an agent to understand and use the tool appropriately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 18 parameters thoroughly. The description adds minimal parameter semantics beyond the schema: it mentions that 'At least one of (text, nazev_obce, kod_obce, kod_adresniho_mista) should be supplied' and explains the two standardisation modes. This meets the baseline of 3 since the schema does the heavy lifting, but the description provides some additional guidance on parameter combinations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Resolve a free-text or structured Czech address against the RÚIAN register' and 'Returns one or more normalised addresses with full geographic codes.' It specifies the exact resource (Czech RÚIAN addresses) and distinguishes itself from sibling tools by focusing on address standardization rather than company or document searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines through dedicated sections: 'Use cases' lists three specific scenarios (normalization, canonical code resolution, validation), and 'Standardisation mode' explains when to use each match type. It also states jurisdictional limitations ('CZ only') and pricing ('free'), giving clear context for when to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies: Search a company registry by name or keyword (Grade: A)
Read-only · Idempotent

Search company registries. Two calling modes — pick EXACTLY ONE per call:

  1. jurisdiction: "GB" — single country, direct query, no confirmation screen. Use when the user has named a specific country.

  2. jurisdictions: ["GB","NO","FR"] — multi-country fan-out when you are unsure. On clients that support MCP elicitation the server asks the user to confirm / edit your list before running; on others it returns an error telling you to ask in chat.

Per-tier caps on how many distinct countries can be in jurisdictions (or searched in a 60-second window via repeated single calls): anonymous/free = 3, pro = 10, max = 30, enterprise = unlimited.

Prefer jurisdiction (singular) when in doubt; ask the user first. The confirmation dialog around jurisdictions is a safety net, not a way to fan out silently. Follow-up tools (get_company_profile, list_filings, get_officers, etc.) do NOT count against the fan-out cap.

Returns candidates with unified top-level fields (jurisdiction, company_id, company_name, status, status_detail, incorporation_date, registered_address) plus a jurisdiction_data object carrying the raw upstream fields verbatim. The status field is a coarse four-value enum (active / inactive / dissolved / unknown) safe for cross-country comparison; status_detail carries the registry's native status string.

Per-country caveats (ID format, accepted input shapes, filter options, paid-tier gates, status taxonomy) are available on demand — call list_jurisdictions({jurisdiction:"<code>"}) for full schema, or list_jurisdictions({supports_tool:"search_companies"}) for the full country-support matrix. All registries are official government sources.
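The per-tier fan-out caps quoted above (anonymous/free = 3, pro = 10, max = 30, enterprise = unlimited) and the trim-to-cap behaviour of the confirmation form can be sketched as a pre-flight helper. The tier names follow the description; the helper and its trimming are illustrative, not the server's implementation.

```python
# Caps on distinct countries per multi-jurisdiction call, per the docs.
FANOUT_CAPS = {"anonymous": 3, "free": 3, "pro": 10, "max": 30, "enterprise": None}

def trim_fanout(jurisdictions, tier):
    """Return the country list the confirmation form would actually run."""
    cap = FANOUT_CAPS[tier]
    if cap is None or len(jurisdictions) <= cap:
        return list(jurisdictions)
    # Over-cap lists are trimmed to the cap rather than rejected outright.
    return list(jurisdictions)[:cap]
```

On the free tier, a four-country guess list would be trimmed to its first three entries before the search runs.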

Parameters (JSON Schema)
ico (optional): CZ only. One or more exact IČO values to look up.
epci (optional): FR only. EPCI (intermunicipal grouping) SIREN.
page (optional): FI only. Page number (1-indexed). PRH paginates with `page` not offset.
alpha (optional): IE only. Alphabetic prefix filter on the company name.
fresh (optional): Bypass the search cache and call upstream registries directly. Default false.
limit (optional): Maximum candidates to return (1-250). Default 10. Per-jurisdiction upstream caps: GB 100, NO 1000, AU 200, IE 250.
query (optional): Company name or keyword. May be EMPTY for FR/IE when you're searching purely by structured filters (e.g. code_postal + ca_min for FR; address + alpha for IE). For AU only, also accepts structured filter syntax: space-separated key:value pairs such as 'postcode:2000 type:PUB active:Y' or 'charity:Y state:NSW postcode:2000'. All other jurisdictions use plain name search.
sidlo (optional): CZ only. Registered-office address filter. Full ARES AdresaFiltr schema supported — use structured RÚIAN codes (kodObce/kodUlice/...) when you have them for exact matches, or free-text (obec/ulice/psc/textovaAdresa) for fuzzy searches. Resolve text to codes via the search_addresses tool.
ca_max (optional): FR only. Maximum chiffre d'affaires (revenue) in EUR.
ca_min (optional): FR only. Minimum chiffre d'affaires (revenue) in EUR.
canton (optional): CH only. 2-letter canton abbreviation (ZH / BE / GE / VD / VS / TI / ...). Mutually exclusive with registryOfCommerceId or legalSeatId.
czNace (optional): CZ only. NACE industry code(s). ARES requires 5-DIGIT or single-LETTER form: 5 digits for sub-class (e.g. '62010' programming), single letter A-U for the section (e.g. 'G' wholesale & retail). 4-digit (class) or 2-digit (division) values are rejected by zod — they would otherwise silently return 0 results from ARES. Filtering by section alone usually exceeds the >1000 cap; combine with another filter.
offset (optional): IE / FR only. Skip the first N results (pagination). CRO caps each page at 250; FR at 25. Combine with limit to walk large result sets.
region (optional): FR only. 2-digit region code.
address (optional): Registered-address substring filter. IE: free-text substring of the company address. IS: prefix of the 'heimili' field on Skatturinn's /leit form (upstream accepts a single word or street prefix; Icelandic-dative form when searching by street name).
bus_ind (optional): IE only. Which CRO register to search: 'C' = companies (default), 'B' = business names, 'E' = both (slowest).
est_bio (optional): FR only. Only Agence Bio certified establishments.
est_ess (optional): FR only. Only social/solidarity-economy entities.
est_rge (optional): FR only. Only RGE (environmental) certified.
location (optional): FI only. Town or city name (any of FI/SV/EN names match).
postCode (optional): FI only. 5-digit Finnish postcode.
activeOnly (optional): CH only. If true, returns only ACTIVE companies (excludes CANCELLED and BEING_CANCELLED).
match_type (optional): IE only. Match strategy for the company_name parameter. 'exact' is fastest (~300ms), 'starts_with' moderate, 'contains' slowest (~3s) — default contains.
vat_number (optional): IS only. Icelandic VSK-númer (VAT number). Usually 5–6 digits (e.g. '11459'). Skatturinn's upstream /leit form redirects a VSK hit straight to the company profile; the adapter surfaces this as a single-candidate result.
code_postal (optional): FR only. 5-digit French postal code (e.g. '75001'). Note: filters companies whose ANY establishment is at this postcode — the company's siège social may be elsewhere. Check `jurisdiction_data.matching_etablissements` to see which establishment matched.
companyForm (optional): FI only. Company form code: OYJ (public Ltd), OY (private Ltd), KY (limited partnership), AY (partnership), OK (cooperative), SÄÄ (foundation), AOY (housing company), etc.
departement (optional): FR only. 2-3 digit department code (e.g. '75', '971').
legalFormId (optional): CH only. Internal legal-form ID (1-999). Use get_code_description(CH, legalForm) to discover codes. Example: 9 = Aktiengesellschaft (AG).
legalSeatId (optional): CH only. BFS commune number of the legal seat. Use get_code_description(CH, community) to discover. Example: 261 = Zurich city. Mutually exclusive with registryOfCommerceId or canton.
pravniForma (optional): CZ only. Legal-form code(s) — see PravniForma code-list (e.g. '112'=s.r.o./LLC, '121'=a.s./joint-stock, '100'=sole trader). Use get_code_description for the full list.
code_commune (optional): FR only. INSEE commune code.
est_qualiopi (optional): FR only. Only Qualiopi-certified training organisations.
financniUrad (optional): CZ only. Tax-office code(s) — see FinancniUrad code-list.
jurisdiction (optional): EXACTLY ONE of `jurisdiction` or `jurisdictions` must be provided. Use `jurisdiction` (singular) when the user has clearly specified ONE country — a direct lookup, no confirmation screen shown. ISO 3166-1 alpha-2 country code. Supported values: 'GB', 'NO', 'AU', 'IE', 'FR', 'FI', 'CZ', 'PL', 'CA', 'CA-BC', 'CA-NT', 'BE', 'IM', 'IS', 'CY', 'CH', 'TW', 'LI', 'DE', 'NZ', 'NL', 'MC', 'IT', 'RU'. Use the exact uppercase code (ISO-3166-2 hyphenated form for CA subdivisions).
legalFormUid (optional): CH only. Public legal-form code per eCH-0097 data standard (4 chars). Example: '0106' = AG, '0108' = Sàrl. Alternative to legalFormId.
nom_personne (optional): FR only. Surname of a dirigeant or elected official to filter by.
jurisdictions (optional): EXACTLY ONE of `jurisdiction` or `jurisdictions` must be provided. Use `jurisdictions` (plural) when you are UNCERTAIN which country the company is in and want to search multiple candidates. Pass an array of 2–N ISO codes representing your best guesses based on company name / domain / user hints. The server will SHOW THE USER your picks in a confirmation dialog (on clients that support it — Claude Desktop, Claude Code, Cursor, and new Gemini CLI) and let them edit before running any search. On clients without that support, the call returns an error telling you to ask the user in chat. Per-tier caps on how many countries can be searched in one call: anonymous/free=3, pro=10, max=30, enterprise=unlimited. If you pass more than the user's cap, the confirmation form will trim to the cap.
type_personne (optional): FR only. Restrict person filter to officers or elected officials.
pravniFormaRos (optional): CZ only. Legal-form code(s) from the ROS public-registers source — usually equivalent to pravniForma.
est_association (optional): FR only. Only entities registered as associations (RNA).
mainBusinessLine (optional): FI only. Statistics Finland TOL 2008 industry code (e.g. '6201') or text.
nature_juridique (optional): FR only. Legal-form code (INSEE).
prenoms_personne (optional): FR only. Given name(s) of a dirigeant/elected official.
resultat_net_max (optional): FR only. Maximum résultat net in EUR.
resultat_net_min (optional): FR only. Minimum résultat net (net profit) in EUR (can be negative).
force_name_search (optional): IS only. When true, treats the query as a plain name search even if it happens to be a 10-digit string (otherwise the adapter treats 10 digits as a kennitala direct-lookup).
est_service_public (optional): FR only. Only public-service entities.
etat_administratif (optional): FR only. 'A'=active, 'C'=ceased. Note: the establishment-level 'F'=Fermé state appears in the data but is NOT a valid filter value upstream — use 'C' for closed entities.
activite_principale (optional): FR / FR-near-point only. NAF/APE industry code (e.g. '64.20Z').
est_societe_mission (optional): FR only. Only mission-driven companies.
registrationDateEnd (optional): FI only. Filter companies registered on/before this date (YYYY-MM-DD).
categorie_entreprise (optional): FR only. Company size category.
registryOfCommerceId (optional): CH only. Internal office number of the cantonal registry of commerce. Use get_code_description(CH, registryOfCommerce) to discover. Example: 20 = Zurich. Mutually exclusive with legalSeatId or canton.
registrationDateStart (optional): FI only. Filter companies registered on/after this date (YYYY-MM-DD).
include_business_names (optional): IE only. Convenience flag — when true, sets bus_ind='E' to search both registers.
est_organisme_formation (optional): FR only. Only training organisations.
tranche_effectif_salarie (optional): FR only. INSEE employee-count band code.
date_naissance_personne_max (optional): FR only. Maximum birth date of a person (YYYY-MM-DD).
date_naissance_personne_min (optional): FR only. Minimum birth date of a person (YYYY-MM-DD).
est_entrepreneur_individuel (optional): FR only. Only sole traders.
section_activite_principale (optional): FR only. Single-letter NAF section A-U.
est_collectivite_territoriale (optional): FR only. Only territorial collectivities.

Output Schema (JSON Schema)
count (optional)
query (optional)
results (optional): Candidate list (single-country key).
cached_at (optional)
candidates (optional): Candidate list (multi-country fan-out key).
queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
jurisdiction (optional): Single-country mode.
jurisdictions (optional): Multi-country fan-out mode.
partial_failures (optional)
per_jurisdiction (optional)
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, idempotentHint=true, etc., but the description adds crucial behavioral context: per-tier caps on country searches (anonymous=3, pro=10, etc.), confirmation dialogs for multi-jurisdiction mode, error handling for unsupported clients, cache bypass via 'fresh' parameter, and detailed output field explanations (unified fields vs. jurisdiction_data). It also notes that follow-up tools don't count against caps, enhancing operational understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (calling modes, caps, preferences, returns, caveats) and uses bullet-like formatting for readability. It's appropriately detailed for a complex tool but could be slightly more concise by reducing some repetitive explanations (e.g., the confirmation dialog is mentioned multiple times). Every sentence adds value, but minor trimming is possible.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's high complexity (62 parameters, no output schema), the description provides comprehensive context: it explains the two main calling modes, tier-based caps, confirmation behavior, return fields, per-country caveats, and references to other tools for further details. It compensates for the lack of output schema by describing the return structure and status field semantics, making it complete for agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the two primary modes (jurisdiction vs. jurisdictions) and their semantics, which aren't fully captured in the schema's individual parameter descriptions. It also hints at parameter interactions (e.g., query may be empty for FR/IE with structured filters) and directs to list_jurisdictions for per-country details, though it doesn't detail all 62 parameters individually.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches company registries by name or keyword, specifying two distinct calling modes (single vs. multi-jurisdiction). It distinguishes itself from siblings like search_companies_near_point by focusing on registry search rather than geographic proximity, and from get_company_profile by being a search rather than a direct lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use each mode: use jurisdiction (singular) when the user names a specific country, and jurisdictions (plural) when uncertain. It warns to prefer the singular mode when in doubt and explicitly mentions follow-up tools (get_company_profile, list_filings, etc.) that don't count against caps, helping differentiate from alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_companies_near_point: Search FR companies within a radius of a geographic point (Grade: A)
Read-only · Idempotent

Search the French registry for companies whose siège social (registered office) lies within a given radius (km) of a latitude/longitude point. Maps to recherche-entreprises' /near_point endpoint. Useful for 'companies within 2km of the Eiffel Tower' style queries. Pricing: free. Returns the same UnifiedSearchCandidate shape as search_companies. Other jurisdictions return 501 — only FR exposes this endpoint.
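A client might pre-validate the constraints stated above (FR only, radius capped at 50 km) before calling the tool. The parameter names match the documented schema; the builder function itself is a hypothetical sketch.

```python
def build_near_point_query(lat, long, radius=5, jurisdiction="FR"):
    """Assemble arguments for search_companies_near_point, or fail early."""
    if jurisdiction != "FR":
        # Other jurisdictions return 501: only FR exposes this endpoint.
        raise ValueError("501: only FR supports geographic search")
    if not 0 < radius <= 50:
        raise ValueError("radius must be in (0, 50] km")
    return {"jurisdiction": "FR", "lat": lat, "long": long, "radius": radius}
```

For example, a "companies within 2km of the Eiffel Tower" query would use lat 48.8584, long 2.2945, radius 2.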

Parameters (JSON Schema)
lat (required): Latitude in decimal degrees, e.g. 48.8566 (Paris).
long (required): Longitude in decimal degrees, e.g. 2.3522 (Paris).
fresh (optional)
limit (optional)
radius (optional): Search radius in km, max 50.
jurisdiction (required): Must be 'FR' — only FR supports geographic search.
activite_principale (optional): Optional NAF code filter.
section_activite_principale (optional): Optional single-letter NAF section A-U.

Output Schema (JSON Schema)
lat (optional)
lng (optional)
count (optional)
results (optional)
radius_m (optional)
candidates (optional)
queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
jurisdiction (optional)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context beyond annotations: pricing information ('Pricing: free'), jurisdiction limitation ('only FR exposes this endpoint'), and error behavior ('Other jurisdictions return 501'). It doesn't contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A compact description with zero waste: it defines the purpose, maps to the endpoint with a usage example, then states pricing and jurisdiction constraints. Every sentence adds essential information, and the structure is front-loaded with core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (8 parameters), rich annotations, and no output schema, the description provides excellent completeness. It covers purpose, usage context, limitations, pricing, endpoint mapping, and return format reference. The combination with annotations makes this fully sufficient for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 75%, providing good parameter documentation. The description adds some semantic context by mentioning 'radius (km)' and 'latitude/longitude point', but doesn't provide additional parameter details beyond what's in the schema. With high schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search the French registry for companies'), target resource ('companies whose siège social lies within a given radius'), and scope ('only FR exposes this endpoint'). It distinguishes from siblings by specifying geographic search capability, unlike general search_companies.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('Useful for companies within 2km of the Eiffel Tower style queries') and when not to use ('Other jurisdictions return 501 — only FR exposes this endpoint'). It also references the sibling tool search_companies for comparison of return format.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_document: Full-text search across a cached document's extracted text (Grade: A)
Read-only · Idempotent

Locate pages containing a phrase. Returns matching page numbers + short context snippets for navigation. Useful when the outline/landmarks don't list your target (e.g. you want 'directors' remuneration' but only 'Directors Report' is a landmark). Up to max_hits pages (default 20) are returned; total_hits counts raw matches across the document.

CRITICAL — snippets are NAVIGATION AIDS ONLY and may contain OCR errors. Once you've identified target pages, call fetch_document_pages(pages=) to read the authoritative text / bytes before citing anything.

Requires get_document_navigation (or fetch_document on a PDF) to have run first so the per-page text index exists in R2.
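The workflow above (build the page index, search for the phrase, then fetch authoritative pages before citing) might be orchestrated like this. The `client` object, its `call` method, and the `matches`/`page` keys in the response are assumptions standing in for a real MCP client and the undocumented output shape.

```python
def find_and_read(client, jurisdiction, document_id, phrase):
    """Search a cached document and return the authoritative page text."""
    # 1. Ensure the per-page text index exists in R2.
    client.call("get_document_navigation",
                jurisdiction=jurisdiction, document_id=document_id)
    # 2. Locate candidate pages; snippets are navigation aids only (OCR noise).
    hits = client.call("search_document", jurisdiction=jurisdiction,
                       document_id=document_id, query=phrase)
    pages = [m["page"] for m in hits.get("matches", [])]
    if not pages:
        return None
    # 3. Read the authoritative text/bytes for those pages before citing.
    return client.call("fetch_document_pages", jurisdiction=jurisdiction,
                       document_id=document_id, pages=pages)
```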

Parameters (JSON Schema)
query (required): Phrase to search for. Case-insensitive.
max_hits (optional)
document_id (required)
jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema

Fields (JSON Schema)
• query (optional)
• matches (optional)
• queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
• document_id (optional)
• jurisdiction (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable behavioral context beyond annotations: it explains that snippets are 'NAVIGATION AIDS ONLY and may contain OCR errors,' specifies the default for max_hits (20), mentions total_hits counting raw matches, and describes the prerequisite of having a per-page text index. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose. Every sentence adds value: the first states what it does, the second explains usage context, the third details limits and total_hits, the fourth warns about snippet reliability and directs to next steps, and the fifth specifies prerequisites. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (search with navigation aids, OCR caveats, prerequisites) and lack of output schema, the description is complete. It covers purpose, usage guidelines, behavioral traits, parameter hints, and next steps. With annotations providing safety profile and the description filling in operational details, it adequately prepares an agent for correct use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (only 'query' and 'jurisdiction' have descriptions). The description adds some parameter context: it explains that max_hits has a default of 20 and limits returns, and implies query is case-insensitive (though this is also in the schema). However, it doesn't fully compensate for the lack of schema descriptions for document_id and max_hits beyond basic constraints, keeping it at the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Locate pages containing a phrase' with 'Returns matching page numbers + short context snippets for navigation.' It specifically distinguishes itself from sibling tools like fetch_document_pages by explaining its role in navigation versus authoritative text retrieval, and contrasts with outline/landmark-based navigation methods.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Useful when the outline/landmarks don't list your target' and gives a concrete example. It also specifies prerequisites: 'Requires get_document_navigation (or fetch_document on a PDF) to have run first' and directs to alternatives: 'call fetch_document_pages(pages=<n>) to read the authoritative text / bytes before citing anything.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_officers: Search a registry for company officers (directors, secretaries) by name (A)
Read-only · Idempotent

Find people who hold or have held officer positions (director, secretary, member, partner) at companies registered in a jurisdiction, by name. Returns a list of officer candidates each with an officer_id, name, and (where the registry exposes it) the number of appointments held. Use the officer_id in get_officer_appointments to retrieve every company that person has been appointed to. This is the entry point for 'follow the human, not the company' investigations.
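The two-step chain the description prescribes — search by name, then walk each officer_id through get_officer_appointments — can be sketched as below. The call_tool stub stands in for whatever MCP client is in use, and the result field names (officers, officer_id, appointments, company_name) follow the tool descriptions on this page but are assumptions, not the server's exact response schema.

```python
def follow_the_human(call_tool, name, jurisdiction="GB"):
    """'Follow the human, not the company': find officer candidates by name,
    then resolve every company each candidate has been appointed to."""
    found = call_tool("search_officers", {"query": name, "jurisdiction": jurisdiction})
    companies = {}
    for officer in found["officers"]:
        # officer_id from the search result keys the appointments lookup
        appts = call_tool("get_officer_appointments",
                          {"officer_id": officer["officer_id"], "jurisdiction": jurisdiction})
        companies[officer["officer_id"]] = [a["company_name"] for a in appts["appointments"]]
    return companies
```

As the query hint notes, a full name keeps the candidate list small; a partial name widens it, and every extra candidate costs one more appointments lookup.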

Parameters (JSON Schema)
• limit (optional)
• query (required): Officer name. Full names work best ('John Smith'). Partial names return more candidates.
• jurisdiction (required): ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`.

Output Schema

Fields (JSON Schema)
• data (optional): Adapters returning a bare array are wrapped here by textResult().
• count (optional)
• query (optional)
• officers (optional)
• queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
• jurisdiction (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare this as read-only, open-world, idempotent, and non-destructive. The description adds valuable behavioral context beyond annotations: it explains the investigative workflow (using officer_id with get_officer_appointments), describes partial name matching behavior, and clarifies what data is returned (officer_id, name, number of appointments where available). It doesn't mention rate limits or authentication needs, but adds meaningful operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with three sentences that each earn their place: first states the core functionality, second explains the output and next-step workflow, third provides strategic context about investigation patterns. No wasted words, front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations, and lack of output schema, the description provides excellent completeness. It covers purpose, usage patterns, behavioral context, and workflow integration. The annotations handle safety and idempotency, while the description adds investigative context and output interpretation guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 67% schema description coverage, the description adds meaningful context beyond the schema. While the schema documents the parameters technically, the description explains the investigative purpose of the 'query' parameter and the relationship between this search and subsequent lookups using officer_id. It doesn't provide additional syntax details for parameters, but adds strategic context about how parameters fit into workflows.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Find people who hold or have held officer positions'), resource ('registry for company officers'), and scope ('by name'). It explicitly distinguishes this tool from its sibling 'get_officer_appointments' by explaining the relationship between them, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('entry point for "follow the human, not the company" investigations') and when to use an alternative ('Use the officer_id in get_officer_appointments to retrieve every company that person has been appointed to'). It also mentions jurisdictional constraints through the input schema, though not directly in the description text.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_specialised_records: Bulk fetch records from a CZ ARES sub-register (A)
Read-only · Idempotent

Fetch raw records from one specific ARES source register in a single call, optionally filtered by a list of IČOs. This is the paired search endpoint for get_specialised_record (which fetches one record at a time). Uses POST /ekonomicke-subjekty-{source}/vyhledat upstream.

── CZ (Czechia ARES) ── Available source codes (same as get_specialised_record, plus 'vr'):
• vr — Commercial register (Veřejný rejstřík): the full company record with officers/PSCs/charges
• ros — Public Registers summary (Registr osob)
• res — Statistical Register (Registr ekonomických subjektů)
• rzp — Trade Licence Register (Registr živnostenského podnikání)
• nrpzs — Healthcare Providers (Národní registr poskytovatelů zdravotních služeb)
• rpsh — Political Parties (Registr politických stran a hnutí)
• rcns — Churches and Religious Societies (Registr církví)
• szr — Farmers (Společný zemědělský registr)
• rs — Schools (Registr škol)
• ceu — Insolvency Record (Centrální evidence úpadců)

The upstream filter is intentionally narrow — ARES only accepts an optional list of IČOs plus pagination on these per-source endpoints. For rich name/address/legal-form search, use search_companies (which queries the main /ekonomicke-subjekty/vyhledat endpoint).

Returns { source, pocet_celkem, count, records[] } with each records[i] preserved verbatim from the upstream source — field set varies per source (refer to the ARES API docs at https://ares.gov.cz/swagger-ui/).

Pricing: free. Other jurisdictions return 501.
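The call described above can be sketched from the client side: validate the source code against the list, auto-pad IČOs to 8 digits as the ico parameter hints, and assemble the per-source path from the documented POST pattern. This is an illustrative sketch, not the server's code — the request body field names ('ico', 'start', 'pocet') are assumptions about the upstream ARES schema and should be checked against the Swagger docs linked above, and the ARES host prefix is deliberately left out.

```python
# Source register codes listed in the tool description.
VALID_SOURCES = {"vr", "ros", "res", "rzp", "nrpzs", "rpsh", "rcns", "szr", "rs", "ceu"}

def pad_ico(ico):
    """Left-pad an IČO to the canonical 8 digits ('123' -> '00000123')."""
    s = str(ico).strip()
    if not s.isdigit() or not 1 <= len(s) <= 8:
        raise ValueError(f"not a valid IČO: {ico!r}")
    return s.zfill(8)

def build_request(source, icos=None, limit=50, offset=0):
    """Assemble the documented POST /ekonomicke-subjekty-{source}/vyhledat call.
    Body field names are guesses at the ARES schema, not confirmed here."""
    if source not in VALID_SOURCES:
        raise ValueError(f"unknown ARES source register: {source!r}")
    body = {"start": offset, "pocet": limit}  # assumed pagination field names
    if icos:
        body["ico"] = [pad_ico(i) for i in icos]  # narrow, optional IČO filter
    return f"/ekonomicke-subjekty-{source}/vyhledat", body
```

Consistent with the note above, omitting icos leaves only pagination in the body, which just pages through the whole register.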

Parameters (JSON Schema)
• ico (optional): Optional IČO (8-digit, or up to 8 digits — auto-padded) or array of IČOs to restrict the search. If omitted, ARES returns the first page of all records in the register (rarely useful — prefer a filter).
• fresh (optional): Bypass cache.
• limit (optional): Page size (1-100).
• offset (optional): Pagination skip.
• source (required): Source register code. See description for the full list.
• jurisdiction (required): 'CZ' only.

Output Schema

Fields (JSON Schema)
• data (optional)
• count (optional)
• candidates (optional)
• queried_at (required): ISO-8601 + Europe/London timezone stamp for when the registry was queried.
• record_kind (optional)
• jurisdiction (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: pricing (free), jurisdiction limitation (CZ only, others return 501), return format details, field variability per source, and reference to external API docs. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, source codes, usage notes, return format, pricing). While comprehensive, some sentences could be more concise (e.g., the source code list is lengthy but necessary). Overall, it's front-loaded with key information and avoids redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple parameters, source-specific behavior) and rich annotations, the description provides complete context. It covers purpose, usage, source options, limitations, return format, pricing, and external references. No output schema exists, but the description adequately explains the return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 6 parameters. The description adds minimal parameter semantics beyond the schema, mainly by listing source codes and noting that IČO filtering is optional but recommended. It doesn't provide additional syntax or format details for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('fetch'), resource ('raw records'), and scope ('from one specific ARES source register in a single call'), distinguishing it from sibling tools like get_specialised_record (single record) and search_companies (rich search). The title reinforces this with 'Bulk fetch records from a CZ ARES sub-register'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: use this for bulk fetching from specific registers with optional IČO filtering, while preferring search_companies for rich name/address/legal-form searches. The description also clarifies when not to use it (without IČO filter is 'rarely useful') and names the alternative tool (search_companies).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
