openregistry
Server Details
Unmodified live data from 27 national registries. UBO chain walker + 10 MCP prompts.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: sophymarine/openregistry
- GitHub Stars: 4
- Server Listing: openregistry
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.6/5 across 10 of 10 tools scored.
Each tool has a clearly distinct purpose: document retrieval, company profile, document metadata, document navigation, officers, shareholders, filings listing, jurisdiction reference, company search, and officer search. There is no overlap or ambiguity.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., fetch_document, get_company_profile, search_companies). No mixed conventions or deviations.
With 10 tools, the server covers a comprehensive set of operations for company registry access without being excessive. Each tool serves a necessary function, and the count is appropriate for the domain.
The tool surface is nearly complete for read-only registry operations, covering company search, profile, filings, documents (with navigation), officers, and shareholders. However, 'get_charges' is mentioned in the list_filings description yet absent as a standalone tool, which indicates a minor gap.
Available Tools (10 tools)

fetch_document: Fetch document (Read-only, Idempotent)
Read a filing's content by document_id (from list_filings). Filing metadata alone doesn't answer most questions — the numbers and text live inside the document.
RESPONSE SHAPES:
• kind='embedded' (under max_bytes ≈ 20 MB) — returns full bytes_base64, source_url_official (evergreen registry URL), and source_url_direct (short-TTL signed proxy URL). PDFs render as a document block you can read natively.
• kind='resource_link' (oversized) — NO bytes_base64. Returns reason, next_steps, both source URLs, and index_preview {page_count, text_layer, outline_present}. Use get_document_navigation to locate pages, then re-call this tool with pages='N-M' and format='pdf'|'text'|'png' for the content.
CRITICAL: if this tool fails (rate limit, 5xx, timeout), do NOT fill in names / numbers / dates from memory — tell the user what failed and offer retry or source_url_official. Outline titles, previews, and snippets from navigation tools are for LOCATING pages, never for quoting.
source_url_official is auto-resolved from the most recent list_filings call; the optional company_id / transaction_id / filing_type / filing_description inputs are overrides for the rare case where document_id didn't come through list_filings.
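To make the two response shapes concrete, here is a minimal TypeScript sketch. It assumes the official MCP TypeScript SDK with an already-connected `client`, that the server returns its JSON payload as a text content block, and a caller-supplied document_id that came from list_filings; treat it as an illustration, not the canonical client.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumption: the JSON payload arrives as the first text content block.
const asJson = (r: unknown) =>
  JSON.parse((r as { content: Array<{ text: string }> }).content[0].text);

async function readFiling(client: Client, docId: string) {
  // docId must come from list_filings; synthesized IDs will 404.
  const first = asJson(await client.callTool({
    name: "fetch_document",
    arguments: { jurisdiction: "GB", document_id: docId },
  }));

  if (first.bytes_base64) {
    // kind='embedded': the whole document is inline, under the ~20 MB cutoff.
    return Buffer.from(first.bytes_base64, "base64");
  }

  // kind='resource_link': oversized. Locate the pages, then re-fetch a range.
  asJson(await client.callTool({
    name: "get_document_navigation",
    arguments: { jurisdiction: "GB", document_id: docId },
  }));
  return asJson(await client.callTool({
    name: "fetch_document",
    arguments: { jurisdiction: "GB", document_id: docId, pages: "1-3", format: "pdf" },
  }));
}
```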
| Name | Required | Description | Default |
|---|---|---|---|
| fresh | No | Bypass R2 cache. Filings are immutable; rarely needed. | |
| format | No | Preferred content type: application/xhtml+xml, application/pdf, application/xml, application/json. Omit to let the adapter pick the most structured option (XHTML > XML > JSON > PDF). | |
| max_bytes | No | Inline-size cutoff. Default ~20 MB. Documents above this return as `kind='resource_link'` — call `get_document_navigation` for them. | |
| company_id | No | Override; auto-resolved from list_filings side-cache. | |
| document_id | Yes | Document ID from list_filings; do not synthesize (composite IDs will 404). | |
| filing_type | No | Override; auto-resolved from list_filings side-cache. | |
| jurisdiction | Yes | ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`. | |
| transaction_id | No | Override; auto-resolved from list_filings side-cache. | |
| filing_description | No | Override; auto-resolved from list_filings side-cache. |
Output Schema
| Name | Required | Description |
|---|---|---|
| pages | No | |
| queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried. |
| size_bytes | No | |
| source_url | No | |
| document_id | No | |
| bytes_base64 | No | |
| jurisdiction | No | |
| chosen_format | No | |
| available_formats | No | |
| bytes_omitted_reason | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds substantial behavioral context beyond annotations: it explains the two response shapes (embedded vs resource_link), details workflow steps for oversized documents, provides critical rules about citation practices and error handling, describes caching behavior and auto-resolution mechanisms, and explains jurisdiction-specific considerations. While annotations cover basic safety (readOnlyHint, idempotentHint), the description adds rich operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Response Shapes, Workflow, Critical Rules) and uses bullet points effectively. While comprehensive, it maintains focus on essential information - every sentence serves a clear purpose in guiding tool usage. The front-loaded statement about being the 'Primary tool for reading a filing's content' immediately establishes purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (9 parameters, no output schema), the description provides exceptional completeness: it covers response formats, error handling, workflow integration with sibling tools, jurisdiction considerations, caching behavior, and practical usage constraints. The description fully compensates for the lack of output schema by detailing what the tool returns and how to interpret different response types.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 89% schema description coverage, the baseline would be 3, but the description adds meaningful context about parameter usage: it explains the primary use of document_id (from list_filings/get_financials), clarifies that override parameters are for 'rare use' cases, provides practical guidance on format selection ('recommended — XHTML > XML > JSON > PDF'), and explains the practical implications of max_bytes settings. This adds significant value beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states this is the 'Primary tool for reading a filing's content' and distinguishes it from sibling tools by explaining that filing metadata alone is insufficient - the actual content requires this tool. It clearly identifies the verb (reading/fetching) and resource (filing documents).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides comprehensive usage guidance: it specifies when to use this tool ('MANDATORY for any substantive answer'), explains the workflow for different response types, distinguishes when to use sibling tools like 'fetch_document_pages' and 'get_document_navigation', and explicitly states when NOT to use certain approaches ('Don't reflexively retry with a larger max_bytes').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_company_profile: Company profile (Read-only, Idempotent)
Fetch the structured profile of a company by its registry-specific ID. Returns unified top-level fields (company_id, company_name, status, status_detail, incorporation_date, registered_address) plus raw upstream fields under jurisdiction_data. status is a coarse active/inactive/dissolved/unknown enum; status_detail keeps the registry's native string. registered_address is a flat string; the upstream nested form (when present) stays in jurisdiction_data.
Does not bundle officers / shareholders / filings / charges — call those tools separately. ID format varies per registry; pull company_id from search_companies rather than guessing. For per-country ID format and the full jurisdiction_data field catalogue call list_jurisdictions({jurisdiction:'<CC>'}).
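A short sketch of that flow, with the same SDK and result-framing assumptions as the fetch_document example above; 'GB' and the single-result pick are illustrative:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

const asJson = (r: unknown) =>
  JSON.parse((r as { content: Array<{ text: string }> }).content[0].text);

async function profileByName(client: Client, name: string) {
  // Resolve the registry-specific ID via search_companies instead of guessing.
  const search = asJson(await client.callTool({
    name: "search_companies",
    arguments: { jurisdiction: "GB", query: name, limit: 1 },
  }));
  const hit = search.results?.[0];
  if (!hit) return null;

  const profile = asJson(await client.callTool({
    name: "get_company_profile",
    arguments: { jurisdiction: "GB", company_id: hit.company_id },
  }));
  // Unified fields sit at the top level; raw registry fields under jurisdiction_data.
  console.log(profile.status, profile.status_detail, profile.registered_address);
  return profile;
}
```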
| Name | Required | Description | Default |
|---|---|---|---|
| fresh | No | Bypass cache; call upstream directly. | |
| include | No | Optional per-country extra fetches; ignored where unsupported. | |
| company_id | Yes | Registry-specific identifier. Examples: GB '00445790' (8-digit Companies House number, or 'SC123456' for Scotland / 'NI...' / 'OC...' / 'LP...'); NO '923609016' (9-digit); AU 11-digit ABN or 9-digit ACN; FR 9-digit SIREN or 14-digit SIRET; PL 10-digit KRS; CZ 8-digit IČO; FI Y-tunnus '0112038-9'. Call list_jurisdictions for the full per-country format. | |
| jurisdiction | Yes | ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`. |
Output Schema
| Name | Required | Description |
|---|---|---|
| status | No | Four-value unified status safe for cross-jurisdiction comparison. |
| company_id | No | |
| queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried. |
| company_name | No | |
| jurisdiction | No | |
| status_detail | No | |
| jurisdiction_data | No | Full original response fields from the upstream registry, field names unchanged. Shape is jurisdiction-specific - see `list_jurisdictions({ jurisdiction: '<CODE>' })`. |
| incorporation_date | No | |
| registered_address | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations. While annotations declare read-only, non-destructive, and idempotent operations, the description details the return structure (unified fields plus jurisdiction_data), explains status field semantics, describes caching behavior ('fresh: true bypasses the cache'), and mentions performance implications of optional flags ('slower', 'doubles upstream calls'). No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and efficiently packed with information. It front-loads the core purpose, then details return values, exclusions, input guidance, optional flags, and per-country caveats. While comprehensive, some sentences are lengthy, and the density of information might slightly reduce immediate clarity, though every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, no output schema, rich annotations), the description is largely complete. It covers purpose, usage, return structure, exclusions, input guidance, and jurisdictional nuances. However, without an output schema, it could more explicitly detail the full return format or error conditions, though the annotations provide safety context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds meaningful context beyond the schema: it explains the variability of company_id formats, provides guidance on obtaining company_id from 'search_companies', clarifies that optional flags are jurisdiction-specific and ignored elsewhere, and explains the purpose of the fresh parameter. However, it doesn't fully detail all parameter interactions or edge cases.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fetch the full profile of a company by its registry-specific ID.' It specifies the verb ('fetch'), resource ('company profile'), and key identifier ('registry-specific ID'), and distinguishes it from siblings by explicitly listing what it does NOT include (filings, officers, etc.) and naming alternative tools for those purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. It states what the tool does NOT include and names specific sibling tools for those purposes (e.g., 'list_filings', 'get_officers'). It also advises pulling company_id from 'search_companies' rather than guessing, and directs users to 'list_jurisdictions' for per-country details.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_document_metadata: Document metadata (Read-only, Idempotent)
Retrieve metadata for a filing document by document_id (from list_filings). Returns available content formats with byte sizes, page count, source URL, creation date. Raw upstream fields preserved under jurisdiction_data. Call this before fetch_document when a document may be large or its format is unknown.
Do NOT construct or guess document_id — some registries use composite IDs that must come from list_filings. Synthesized IDs will 404. Empty available_formats means the body is paywalled or unavailable upstream. Unsupported jurisdictions return 501.
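A hedged sketch of the check-before-download pattern; the MIME keying of size_bytes_by_format is an assumption inferred from fetch_document's format values, and the client assumptions match the earlier sketches:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

const asJson = (r: unknown) =>
  JSON.parse((r as { content: Array<{ text: string }> }).content[0].text);

async function fetchIfAvailable(client: Client, docId: string) {
  const meta = asJson(await client.callTool({
    name: "get_document_metadata",
    arguments: { jurisdiction: "GB", document_id: docId }, // docId from list_filings
  }));

  if (!meta.available_formats?.length) {
    return null; // body is paywalled or unavailable upstream
  }
  // Assumed keying: size_bytes_by_format indexed by MIME type.
  const pdfSize = meta.size_bytes_by_format?.["application/pdf"];
  if (pdfSize && pdfSize > 20 * 1024 * 1024) {
    // Over the inline cutoff: expect kind='resource_link' and plan a paged fetch.
  }
  return asJson(await client.callTool({
    name: "fetch_document",
    arguments: { jurisdiction: "GB", document_id: docId },
  }));
}
```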
| Name | Required | Description | Default |
|---|---|---|---|
| fresh | No | Bypass cache. Filings are immutable; rarely needed. | |
| document_id | Yes | Document ID from a previous list_filings call; do not synthesize. | |
| jurisdiction | Yes | ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`. |
Output Schema
| Name | Required | Description |
|---|---|---|
| pages | No | |
| created_at | No | |
| queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried. |
| source_url | No | |
| document_id | No | |
| jurisdiction | No | |
| available_formats | No | |
| size_bytes_by_format | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it explains error conditions (404/502 for synthesized IDs, 501 for paywalled/unsupported jurisdictions), availability caveats (available_formats may be empty), and the purpose of checking metadata before downloading large documents. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear paragraphs: first states purpose and returns, second provides usage guidance, third gives critical warnings, and fourth covers edge cases. Every sentence adds value without redundancy, and key points are front-loaded (e.g., the warning about not constructing IDs is emphasized).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (metadata retrieval with jurisdiction-specific behaviors), the description is highly complete. It covers purpose, usage, prerequisites, error cases, and relationships to other tools. While there's no output schema, the description details return values (formats, sizes, page count, etc.). The annotations provide safety context, and the description fills in behavioral nuances adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (2 of 3 parameters have descriptions). The description adds meaningful context for document_id beyond the schema's 'Document ID from a previous list_filings call' by warning against constructing IDs and explaining composite ID formats. It also clarifies jurisdiction usage by referencing list_jurisdictions for details. However, it doesn't explicitly address the 'fresh' parameter's semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('retrieve metadata') and resource ('filing document'), distinguishing it from sibling tools like fetch_document (which downloads content) and list_filings (which lists documents). It explicitly mentions what metadata is returned (content formats, byte sizes, page count, source URL, creation date, jurisdiction_data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('call this before fetch_document when the document might be large or you don't yet know the format') and when not to use it ('do NOT construct or guess document_id values'). It names specific alternatives (list_filings for obtaining IDs, fetch_document for downloading, list_jurisdictions for jurisdiction details).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_officers: Officers (Read-only, Idempotent)
Return a company's officers — current directors, secretaries, members, partners, board members, procurists, liquidators, plus historical resignations by default. Each officer has a unified shape (officer_id, name, role, appointed_on, resigned_on, is_active) plus raw upstream fields in jurisdiction_data. Role labels pass through in the registry's native language (e.g. Styremedlem, Předseda představenstva, Président); translate client-side. Birth-date precision varies by registry.
Officer-ID stability varies: corporate officers usually carry the corporate's own company_id; natural persons may carry a synthetic index. Some registries mask names under GDPR — that masking is upstream. Jurisdictions without an officer feed return 501.
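For example, fetching only the currently-serving board (include_resigned defaults to true) might look like the sketch below, under the same client assumptions as earlier; the fallback across officers/items/data mirrors the output schema that follows:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

const asJson = (r: unknown) =>
  JSON.parse((r as { content: Array<{ text: string }> }).content[0].text);

async function currentBoard(client: Client, companyId: string) {
  const body = asJson(await client.callTool({
    name: "get_officers",
    arguments: {
      jurisdiction: "NO",
      company_id: companyId,   // e.g. a 9-digit Norwegian org number
      include_resigned: false, // currently-serving only
    },
  }));
  // The list may arrive under officers, items, or the data wrapper.
  const officers = body.officers ?? body.items ?? body.data ?? [];
  // Role labels stay in the registry's native language (e.g. Styremedlem).
  return officers.map((o: { name: string; role: string }) => `${o.name}: ${o.role}`);
}
```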
| Name | Required | Description | Default |
|---|---|---|---|
| fresh | No | Bypass cache; refetch from upstream. | |
| company_id | Yes | Registry company ID, from search_companies. | |
| jurisdiction | Yes | ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`. | |
| group_by_person | No | CZ only. Dedupe the same person across consecutive appointments (board member → chair → vice-chair) into one entry; appointments list under `jurisdiction_data._appointments[]`. Default false. | |
| include_resigned | No | Include resigned officers. Default true; set false for currently-serving only. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | Adapter returns a bare array; textResult() wraps under `data`. |
| items | No | |
| officers | No | |
| queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried. |
| next_cursor | No | |
| total_count | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds significant behavioral context beyond annotations: it explains GDPR masking, jurisdiction-specific limitations (e.g., birth-date precision, 501 gating), cache bypass with 'fresh: true', and how flags are ignored on unsupported registries. While annotations cover read-only, open-world, and idempotent hints, the description enriches this with practical constraints and data source details without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with core functionality, followed by details on data shape, flags, and caveats. While comprehensive, it remains focused with minimal redundancy, though some sentences could be slightly tightened (e.g., the per-country caveats paragraph is dense but informative).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, no output schema, rich annotations), the description is highly complete: it covers purpose, usage, data format, parameter semantics, jurisdiction-specific behaviors, and links to other tools for further details. It addresses gaps from missing output schema by describing the unified shape of returned officers and potential errors like 501.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With schema description coverage at 60%, the description compensates by explaining parameter implications in detail: it clarifies the default and effect of 'include_resigned', specifies that 'group_by_person' is for CZ only, and notes that 'fresh' bypasses cache. It also adds context for 'jurisdiction' and 'company_id' by linking to other tools for support details, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return the officers of a company' with specific details about what constitutes an officer (directors, secretaries, members, etc.) and distinguishes it from sibling tools like 'get_officer_appointments' by explaining the relationship between them. It goes beyond a simple list to explain the unified shape of returned data and cross-company tracing capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: it mentions using 'get_officer_appointments' for cross-company tracing with officer_id, and directs users to 'list_jurisdictions' for per-country caveats and support details. It also explains when certain flags are applicable (e.g., 'group_by_person' for CZ only) and when jurisdictions return 501 errors.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_filings: Filing history (Read-only, Idempotent)
Return a company's filing history, newest first. Each filing has filing_id, filing_date, category, description, and (when upstream exposes one) a document_id that round-trips to get_document_metadata / fetch_document. Raw upstream fields preserved under jurisdiction_data.
Filter via the optional category. Common normalized values: 'accounts', 'annual-return', 'capital', 'charges', 'confirmation-statement', 'incorporation', 'insolvency', 'liquidation', 'mortgage', 'officers', 'resolution'. Native upstream form codes also accepted.
This tool returns metadata only — call fetch_document on document_id for the actual filing bytes. has_document=false means the body is paywalled or unavailable upstream. Pagination uses limit (default 25, max 1000) plus cursor (GB) or offset (IE). Unsupported jurisdictions return 501; call list_jurisdictions for per-country category values and pagination style.
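A sketch of cursor pagination for GB with a category filter (IE would use offset instead); the SDK and result-framing assumptions are the same as in the earlier sketches:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

const asJson = (r: unknown) =>
  JSON.parse((r as { content: Array<{ text: string }> }).content[0].text);

async function accountsDocumentIds(client: Client, companyId: string) {
  const ids: string[] = [];
  let cursor: string | undefined;
  do {
    const page = asJson(await client.callTool({
      name: "list_filings",
      arguments: {
        jurisdiction: "GB",
        company_id: companyId,
        category: "accounts",          // normalized category value
        limit: 100,
        ...(cursor ? { cursor } : {}), // omit cursor on the first page
      },
    }));
    for (const f of page.items ?? []) {
      if (f.document_id) ids.push(f.document_id); // absent when has_document=false
    }
    cursor = page.next_cursor;
  } while (cursor);
  return ids; // feed these to get_document_metadata / fetch_document
}
```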
| Name | Required | Description | Default |
|---|---|---|---|
| fresh | No | Bypass cache; refetch from upstream. | |
| limit | No | Items per page. Default 25. | |
| cursor | No | Opaque pagination cursor returned as `next_cursor` (GB). Omit for first page. | |
| offset | No | Pagination offset (IE). | |
| category | No | Optional category filter. Use a normalized value or the registry's native form code. Call `list_jurisdictions({jurisdiction:'<CC>'})` for the accepted values per country. | |
| company_id | Yes | Registry-specific company ID. IE accepts an optional '/B' suffix for the business-name register. | |
| jurisdiction | Yes | ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`. |
Output Schema
| Name | Required | Description |
|---|---|---|
| items | No | |
| queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried. |
| next_cursor | No | |
| total_count | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds substantial behavioral context beyond what annotations provide. While annotations declare readOnlyHint=true, idempotentHint=true, etc., the description details pagination behavior (limit default 25, max 1000, cursor vs offset pagination), jurisdiction limitations (unsupported jurisdictions return 501), document availability constraints (has_document flag, paywalled bodies), and sorting order (newest-first). This provides crucial operational context that annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized for a complex tool with 7 parameters. It's front-loaded with core functionality, then addresses filtering, pagination, and jurisdiction caveats. While comprehensive, every sentence earns its place by providing necessary operational context. Minor deduction for some density, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, jurisdiction variations, pagination differences) and lack of output schema, the description provides excellent completeness. It covers return format, filtering options, pagination mechanisms, jurisdiction limitations, error conditions, and relationships to sibling tools. The guidance to call list_jurisdictions for per-country details appropriately delegates complexity while maintaining completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 86% schema description coverage, the baseline would be 3, but the description adds significant value beyond the schema. It explains the relationship between category filtering and jurisdiction-specific codes, clarifies pagination behavior (cursor vs offset by jurisdiction), and provides context about company_id suffixes and jurisdiction support. The description compensates for the 14% schema coverage gap with practical usage guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return a company's filing history' with specific details about what each filing contains (filing_id, filing_date, category, description, document_id, jurisdiction_data) and that results are newest-first. It distinguishes from siblings like get_document_metadata and fetch_document by explaining the relationship, and from other list/search tools by focusing specifically on filings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool vs alternatives: it mentions using the optional 'category' parameter to filter, explains when to use get_document_metadata/fetch_document for document retrieval, and explicitly states that unsupported jurisdictions return 501 with the alternative to call list_jurisdictions for per-country caveats. It also distinguishes from sibling tools by explaining the document_id relationship.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_jurisdictions: Country and tool matrix (Read-only, Idempotent)
Per-country reference. Pass EXACTLY ONE of:
• jurisdiction='GB' — registry name + URL, data license, company-ID format with examples, native-to-unified status enum mapping, and the list of tools supported.
• supports_tool='get_officers' — which jurisdictions implement a given tool.
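Both modes in one sketch, under the same client assumptions as the earlier examples; passing neither parameter is the documented error case:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

const asJson = (r: unknown) =>
  JSON.parse((r as { content: Array<{ text: string }> }).content[0].text);

async function referenceLookups(client: Client) {
  // Single-country mode: the full GB schema (ID format, status mapping, tools).
  const gb = asJson(await client.callTool({
    name: "list_jurisdictions",
    arguments: { jurisdiction: "GB" },
  }));

  // Support-matrix mode: which jurisdictions implement get_officers.
  const matrix = asJson(await client.callTool({
    name: "list_jurisdictions",
    arguments: { supports_tool: "get_officers" },
  }));
  return { gb: gb.jurisdiction, supported: matrix.supported_in };
}
```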
| Name | Required | Description | Default |
|---|---|---|---|
| jurisdiction | No | ISO 3166-1 alpha-2 country code (case-insensitive; CA subdivisions hyphenated like 'CA-BC'). Returns the full per-country schema. Mutually exclusive with `supports_tool`. | |
| supports_tool | No | Tool name (e.g. 'get_officers', 'get_shareholders'). Returns the matrix of which jurisdictions implement this tool. Mutually exclusive with `jurisdiction`. |
Output Schema
| Name | Required | Description |
|---|---|---|
| hint | No | |
| tool | No | Populated in cross-country support-matrix mode: echoes the tool name that was queried. |
| queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried. |
| jurisdiction | No | Populated in single-country mode: carries the JurisdictionMetadata for the requested country. |
| supported_in | No | |
| supported_count | No | |
| not_supported_in | No | |
| not_supported_count | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: the two distinct response shapes, the 400 error for no parameters, and the case-insensitive handling of jurisdiction codes. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient and well-structured: first sentence establishes purpose, bullet points clearly explain the two modes, and final sentences cover error cases and sibling differentiation. Every sentence earns its place with zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (two distinct modes with different return shapes), the description provides excellent context about what information is returned in each mode. However, without an output schema, the description doesn't fully document the return structure details. The annotations provide good safety coverage, making this mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds some semantic context about what each parameter triggers (full schema vs. cross-country matrix) and provides example values, but doesn't add syntax or format details beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides 'Per-country reference data dictionary' with two specific modes: full schema for one country or cross-country matrix for one tool. It distinguishes from sibling 'about' by specifying this tool is for jurisdiction-specific reference data while 'about' is for server-level info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: 'pass EXACTLY ONE of' the two parameters, with clear examples of each mode. It explicitly states when NOT to use this tool ('For server-level info... call `about` instead') and provides the consequence of incorrect usage ('Calling with no parameters returns a structured 400').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_companies: Search companies (Read-only, Idempotent)
Search a national company registry by name or keyword. Pass EXACTLY ONE of:
• jurisdiction='GB' - single country, direct.
• jurisdictions=['GB','NO','FR'] - multi-country when you're unsure; the server asks the user to confirm (on clients with MCP elicitation) or returns an error so you can confirm in chat. Per-tier cap on distinct countries per call: anonymous=3, pro=10, max=30, enterprise=unlimited.
Returns candidates with unified fields (company_id, company_name, status, incorporation_date, registered_address) plus raw upstream jurisdiction_data. For country-specific filters (FR ca_min, CZ czNace, CH canton, etc.) pass the filters object — call list_jurisdictions for the per-country schema.
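A sketch of both calling modes, with the same client assumptions as earlier; the FR filter key comes from the filters examples below, and the fan-out call may trigger the confirmation behavior described above:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

const asJson = (r: unknown) =>
  JSON.parse((r as { content: Array<{ text: string }> }).content[0].text);

async function findCompany(client: Client, name: string) {
  // Known country: single-jurisdiction mode with a country-specific filter.
  const fr = asJson(await client.callTool({
    name: "search_companies",
    arguments: { jurisdiction: "FR", query: name, filters: { code_postal: "75001" } },
  }));

  // Country unknown: fan-out mode, capped per tier (anonymous=3 distinct
  // countries per call) and subject to user confirmation via elicitation or chat.
  const fanOut = asJson(await client.callTool({
    name: "search_companies",
    arguments: { jurisdictions: ["GB", "NO", "FR"], query: name },
  }));
  // Single-country results use `results`; fan-out results use `candidates`.
  return [...(fr.results ?? []), ...(fanOut.candidates ?? [])];
}
```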
| Name | Required | Description | Default |
|---|---|---|---|
| fresh | No | Bypass cache; call upstream directly. | |
| limit | No | Max candidates to return (1-250). Default 10. | |
| query | No | Company name or keyword. May be empty for FR / IE when searching purely by structured `filters`. AU also accepts structured `key:value` pairs in this field (e.g. 'postcode:2000 type:PUB active:Y'). | |
| offset | No | Pagination offset (IE / FR). | |
| filters | No | Country-specific advanced filters. Flat object keyed by the upstream field name (e.g. FR `code_postal` / `ca_min`, CZ `czNace`, CH `canton`, FI `companyForm`, IE `alpha`, IS `vat_number`). Call `list_jurisdictions({jurisdiction:'<CC>'})` for the per-country schema. | |
| jurisdiction | No | ISO 3166-1 alpha-2 country code (uppercase; CA subdivisions hyphenated, e.g. 'CA-BC'). Use this when one country is known. Mutually exclusive with `jurisdictions`. | |
| jurisdictions | No | Array of ISO codes when the country is uncertain. The server asks the user to confirm the list (clients with MCP elicitation) or returns an error so you can ask in chat. Mutually exclusive with `jurisdiction`. |
Output Schema
| Name | Required | Description |
|---|---|---|
| count | No | |
| query | No | |
| results | No | Candidate list (single-country key). |
| cached_at | No | |
| candidates | No | Candidate list (multi-country fan-out key). |
| queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried. |
| jurisdiction | No | Single-country mode. |
| jurisdictions | No | Multi-country fan-out mode. |
| partial_failures | No | |
| per_jurisdiction | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, idempotentHint=true, etc., but the description adds crucial behavioral context: per-tier caps on country searches (anonymous=3, pro=10, etc.), confirmation dialogs for multi-jurisdiction mode, error handling for unsupported clients, cache bypass via 'fresh' parameter, and detailed output field explanations (unified fields vs. jurisdiction_data). It also notes that follow-up tools don't count against caps, enhancing operational understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (calling modes, caps, preferences, returns, caveats) and uses bullet-like formatting for readability. It's appropriately detailed for a complex tool but could be slightly more concise by reducing some repetitive explanations (e.g., the confirmation dialog is mentioned multiple times). Every sentence adds value, but minor trimming is possible.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's high complexity (62 parameters, no output schema), the description provides comprehensive context: it explains the two main calling modes, tier-based caps, confirmation behavior, return fields, per-country caveats, and references to other tools for further details. It compensates for the lack of output schema by describing the return structure and status field semantics, making it complete for agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the two primary modes (jurisdiction vs. jurisdictions) and their semantics, which aren't fully captured in the schema's individual parameter descriptions. It also hints at parameter interactions (e.g., query may be empty for FR/IE with structured filters) and directs to list_jurisdictions for per-country details, though it doesn't detail all 62 parameters individually.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches company registries by name or keyword, specifying two distinct calling modes (single vs. multi-jurisdiction). It distinguishes itself from siblings like search_companies_near_point by focusing on registry search rather than geographic proximity, and from get_company_profile by being a search rather than a direct lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use each mode: use jurisdiction (singular) when the user names a specific country, and jurisdictions (plural) when uncertain. It warns to prefer the singular mode when in doubt and explicitly mentions follow-up tools (get_company_profile, list_filings, etc.) that don't count against caps, helping differentiate from alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_officers: Search officers by name (Read-only, Idempotent)
Find people who hold or have held officer positions (director, secretary, member, partner) in a jurisdiction's registry, searching by name. Returns candidates with officer_id, name, and (where exposed) appointment count. The entry point for person-centric investigations.
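A minimal sketch of a person-centric lookup under the same client assumptions as above; the officers/data fallback mirrors the output schema below, and the get_officer_appointments follow-up named in the quality notes is left out here:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

const asJson = (r: unknown) =>
  JSON.parse((r as { content: Array<{ text: string }> }).content[0].text);

async function officerCandidates(client: Client, fullName: string) {
  const body = asJson(await client.callTool({
    name: "search_officers",
    arguments: { jurisdiction: "GB", query: fullName, limit: 20 }, // full names match best
  }));
  const candidates = body.officers ?? body.data ?? [];
  // Keep officer_id for follow-up, person-centric tracing.
  return candidates.map((o: { officer_id: string; name: string }) => ({
    id: o.officer_id,
    name: o.name,
  }));
}
```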
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max officer candidates to return. Range 1-100, default 20. | |
| query | Yes | Officer name. Full names work best ('John Smith'). Partial names return more candidates. | |
| jurisdiction | Yes | ISO 3166-1 alpha-2 country code (uppercase). All registries are official government sources. Currently supported: AU, BE, CA, CA-BC, CA-NT, CH, CY, CZ, DE, ES, FI, FR, GB, HK, IE, IM, IS, IT, KR, KY, LI, MC, MX, MY, NL, NO, NZ, PL, RU, TW. Per-country capability, ID format, examples, status mapping, and caveats: call `list_jurisdictions({jurisdiction:'<code>'})`. To find which countries support a specific tool: `list_jurisdictions({supports_tool:'<tool>'})`. |
Output Schema
| Name | Required | Description |
|---|---|---|
| data | No | Adapters returning a bare array are wrapped here by textResult(). |
| count | No | |
| query | No | |
| officers | No | |
| queried_at | Yes | ISO-8601 + Europe/London timezone stamp for when the registry was queried. |
| jurisdiction | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare this as read-only, open-world, idempotent, and non-destructive. The description adds valuable behavioral context beyond annotations: it explains the investigative workflow (using officer_id with get_officer_appointments), describes partial name matching behavior, and clarifies what data is returned (officer_id, name, number of appointments where available). It doesn't mention rate limits or authentication needs, but adds meaningful operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with three sentences that each earn their place: first states the core functionality, second explains the output and next-step workflow, third provides strategic context about investigation patterns. No wasted words, front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations, and lack of output schema, the description provides excellent completeness. It covers purpose, usage patterns, behavioral context, and workflow integration. The annotations handle safety and idempotency, while the description adds investigative context and output interpretation guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 67% schema description coverage, the description adds meaningful context beyond the schema. While the schema documents the parameters technically, the description explains the investigative purpose of the 'query' parameter and the relationship between this search and subsequent lookups using officer_id. It doesn't provide additional syntax details for parameters, but adds strategic context about how parameters fit into workflows.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Find people who hold or have held officer positions'), resource ('registry for company officers'), and scope ('by name'). It explicitly distinguishes this tool from its sibling 'get_officer_appointments' by explaining the relationship between them, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('entry point for "follow the human, not the company" investigations') and when to use an alternative ('Use the officer_id in get_officer_appointments to retrieve every company that person has been appointed to'). It also mentions jurisdictional constraints through the input schema, though not directly in the description text.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.