
Server Details

17+ Japan MCP tools (weather/calendar v2/local-pack/enrich). x402 on Base, wallet-free trial.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: MatsushitaTokitsugu/micro-data-api-factory-public
GitHub Stars: 0

Tool Descriptions: B

Average 3.7/5 across 13 of 13 tools scored. Lowest: 3/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose, grouped by domain (company data, grants, local compliance). Within groups, tools differ by input keys or specific operations, with no overlaps. Descriptions make boundaries explicit.

Naming Consistency: 5/5

All tools follow the pattern jp_{domain}_{specific_action}, using consistent snake_case and predictable naming. Group prefixes (data, grants, local) clearly segment functionality.

Tool Count: 5/5

13 tools is well-scoped for a server covering three specialized areas. Each tool serves a distinct need; none feel redundant or missing for the stated purpose.

Completeness: 5/5

The tool set covers core operations for Japanese company enrichment (multiple lookup methods, filings, people), grants (search, detail, eligibility, upcoming), and local compliance (invoice verification, payroll). No obvious gaps for a read-oriented data API.

Available Tools

13 tools
jp_data_company_by_domain: A

JP Data Enrich: candidate company lookup from a domain or URL. Uses company_url and brand-alias fallback where necessary; returns confidence labels so agents do not treat domain matches as legal proof. Production x402 price $0.02.

Parameters (JSON Schema)
Name | Required | Description | Default
domain | Yes | Domain or URL, e.g. https://www.bk.mufg.jp/ |
verbose | No | |
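As an illustration of how an agent would invoke this tool, here is a minimal sketch of the MCP tools/call JSON-RPC envelope. The envelope shape follows the MCP specification; the Streamable HTTP endpoint and any x402 payment headers are assumed and omitted, and the argument value reuses the schema's own example.

```python
import json

# Sketch of an MCP "tools/call" request for jp_data_company_by_domain.
# "verbose" is optional and its effect is undocumented, so it is left out.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "jp_data_company_by_domain",
        "arguments": {"domain": "https://www.bk.mufg.jp/"},
    },
}
payload = json.dumps(request)  # body sent over the Streamable HTTP transport
```

The response would carry the candidate list with confidence labels described above; since no output schema is published, its exact shape is not shown here.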
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses the fallback mechanism, confidence labels, and pricing, signaling the tool's non-authoritative nature. It adequately conveys read-only behavior without declaring it explicitly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences convey purpose, fallback, confidence labels, and pricing with no wasted words. Efficient and front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's core function and fallback, but lacks details on return structure or output format. Given no output schema, more information would be beneficial for agents.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds value for the domain parameter by mentioning fallback, but fails to describe the 'verbose' parameter (schema coverage 50%). No additional semantics are provided beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'candidate company lookup from a domain or URL', specifying the verb, resource, and input type. It distinguishes from siblings like jp_data_company_by_name or jp_data_company_by_houjin_number by focusing on domain input.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context for when to use the tool (enrichment, not legal proof) and mentions fallback behavior. However, it does not explicitly exclude alternatives or state when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jp_data_company_by_houjin_number: A

JP Data Enrich: company profile enrichment by 13-digit Japanese corporate number. Returns gBizINFO-derived profile fields such as address, representative, capital, employees, business summary, and source coverage. Production x402 price $0.02.

Parameters (JSON Schema)
Name | Required | Description | Default
verbose | No | |
houjin_number | Yes | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the data source (gBizINFO) and returns specific fields, but does not disclose read-only nature (inferred but not stated), rate limits, data freshness, or error handling. The pricing detail is a minor transparency win but insufficient for a score above 3.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with two sentences and no redundant information. However, the pricing note 'Production x402 price $0.02' may be tangential for tool selection, slightly reducing clarity. Still, it is well structured and front-loaded.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description moderately explains return values with examples (address, representative, etc.) but lacks completeness on edge cases, error responses, or pagination. For a lookup tool with two parameters, this is adequate but not comprehensive.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. While houjin_number is self-explanatory from its pattern and name, the verbose parameter is completely undescribed. The description lists example return fields but adds no meaning to the parameters themselves, leaving verbose's effect ambiguous.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: company profile enrichment using a 13-digit Japanese corporate number. It lists specific fields returned, distinguishing it from sibling tools that use different lookup methods (by_domain, by_name). The verb 'enrichment' and resource 'company profile' are specific and unambiguous.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a houjin_number is available, but lacks explicit guidance on when not to use this tool or when alternatives (e.g., jp_data_company_by_name) are more appropriate. No exclusions or prerequisites are provided, leaving the agent to infer based on the parameter.

jp_data_company_by_name: A

JP Data Enrich: Japanese company candidate search from name or brand input. Returns corporate-number candidates, confidence labels, ambiguity, and alternative modes. Use as a discovery step, not authoritative KYB verification. Production x402 price $0.02.

Parameters (JSON Schema)
Name | Required | Description | Default
name | Yes | |
limit | No | |
verbose | No | |
match_mode | No | |
entity_type | No | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description must carry behavioral transparency. It mentions that confidence labels and ambiguity are returned, but lacks details on side effects, error handling, or external API calls. Adequate but not comprehensive.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, front-loaded with purpose. Price information is slightly extraneous but not problematic. Efficient overall.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 5 parameters, no output schema, and no parameter descriptions, the description is insufficient. It omits return structure, how to interpret confidence, and parameter usage details, leaving significant gaps.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%. The description only references 'name or brand input' but does not explain parameters like limit, verbose, match_mode, or entity_type. Agents get no guidance beyond schema types.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for Japanese company candidates from name or brand input, returning corporate-number candidates with confidence labels. This distinguishes it from sibling tools like jp_data_company_by_domain and jp_data_company_by_houjin_number.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises use as a discovery step, not for authoritative KYB verification, providing clear context for when to use this tool. Does not directly mention alternatives but the purpose is well-defined.

jp_data_edinet_filings: B

JP Data Enrich: EDINET filing metadata by EDINET code. Returns recent filing IDs, dates, form names, XBRL availability, and document URLs. Production x402 price $0.02.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | |
verbose | No | |
edinet_code | Yes | EDINET code, e.g. E02144 |
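Only edinet_code carries a schema description, so a client may want a cheap local sanity check before spending a paid call. A minimal sketch, with the pattern inferred from the example E02144 rather than from any official format guarantee:

```python
import re

def looks_like_edinet_code(code: str) -> bool:
    # 'E' followed by five digits, inferred from the documented example E02144.
    # A hypothetical pre-check only; the server remains the source of truth.
    return re.fullmatch(r"E\d{5}", code) is not None
```

Rejecting obviously malformed codes client-side avoids paying $0.02 for a guaranteed miss, at the cost of assuming the format never changes.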
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must reveal behavior. It states it returns metadata, implying a read operation, but doesn't explicitly confirm read-only nature or disclose any potential side effects, latency, or authentication needs.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that includes pricing info, which aids cost awareness. It is concise and front-loads the key purpose, though the price could be considered extraneous.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While it lists return fields, it lacks an output schema and doesn't fully describe the format or all possible return values. Adequate but incomplete for a tool with no output schema.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one of three parameters (edinet_code) has a schema description. The tool description does not explain 'limit' or 'verbose' beyond their schema definitions, adding minimal value.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns EDINET filing metadata by EDINET code, listing specific fields. It differentiates itself from sibling tools by focusing on EDINET filings.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not explain prerequisites or scenarios where other tools might be preferred.

jp_data_people_by_houjin: A

JP Data Enrich: extract public EDINET officer information by corporate number where EDINET mapping is available. Public filing data only; no contact scraping or private-source enrichment. Production x402 price $0.05.

Parameters (JSON Schema)
Name | Required | Description | Default
verbose | No | |
houjin_number | Yes | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses that the tool uses public filing data only and does not scrape contacts, but does not mention rate limits or error handling for missing mapping.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words, front-loaded with purpose and constraints.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose and limitations adequately for a simple tool, though output format is not described (no output schema).

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so description must compensate. It implies houjin_number is a corporate number but adds no detail beyond schema; verbose parameter is not explained.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it extracts public EDINET officer information by corporate number, distinguishing it from sibling tools like jp_data_company_by_houjin_number which focus on company data.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Specifies data source (EDINET) and constraints (public data only, no scraping), but lacks explicit comparison to alternatives or when-not-to-use guidance.

jp_grants_detail: A

Detailed information for a specific Japanese grant by J-Grants subsidy ID (e.g. id=a0WJ200000CDYBTMA5). Returns ministry, outline, subsidy_rate, max_subsidy_amount, deadline_date, application_method, etc.

Parameters (JSON Schema)
Name | Required | Description | Default
id | Yes | J-Grants subsidy ID |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the output fields but does not disclose any behavioral traits such as side effects, authentication requirements, or rate limits. The operation is implied to be read-only but not explicitly stated.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys purpose, example, and output fields. Every word contributes meaning without redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (one simple parameter, no nested objects, no output schema), the description adequately covers what the tool does. It lists the key returned fields, though it could benefit from a brief note on the response structure or format.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter 'id', which is already described as 'J-Grants subsidy ID' in the schema. The description adds a concrete example (id=a0WJ200000CDYBTMA5) and lists returned fields, but this adds only marginal semantic value beyond the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns detailed information for a specific Japanese grant by subsidy ID. It provides an example ID format and lists specific returned fields (ministry, outline, etc.), distinguishing it from sibling tools like jp_grants_search and jp_grants_eligible.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a specific subsidy ID is available, but it does not explicitly state when to use this tool over alternatives like jp_grants_search or jp_grants_eligible. No exclusion criteria or context for when not to use it is provided.

jp_grants_eligible: A

Review-priority ranking for Japanese grants by company profile (industry / size / prefecture). Returns ranking_score, nullable eligibility_score, confidence_score, uncertainty_score, unknown_criteria, score_breakdown, recommendation, and match_reasons. eligibility_score is suppressed when upstream requirement data is too sparse. Backed by live J-Grants search plus supplemental public-history proxy rows where available.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | |
industry | Yes | JSIC Rev.13 major-division code uppercase (A-T) or documented lowercase aliases (canonical English label like manufacturing / ict / healthcare, additional aliases like real-estate, or Japanese label like 製造業 / 情報通信 / 医療福祉). Uppercase / mixed-case English (Manufacturing / ICT / RealEstate) / whitespace insertion / underscore-separated forms are rejected (422 + correction_hints) — pick exactly one of the documented byte-exact forms. |
prefecture | No | JIS X 0401 2-digit code (01-47), JP name (東京都), or lowercase Romaji in 4 documented forms: base (tokyo) / base+suffix (tokyoto) / base-suffix (tokyo-to) / base_suffix (tokyo_to). Uppercase / mixed-case / whitespace / arbitrary symbol insertion are rejected (422 + correction_hints) — pick exactly one of the 4 documented lowercase forms. |
company_size | Yes | Company size bracket. Canonical values are micro / small / medium, and exact buyer-facing aliases are also accepted: micro=小規模/小規模事業者/個人事業主, small=中小企業/中小企業者/SME/スタートアップ, medium=中堅企業/中規模. Accepted aliases resolve to the canonical value before scoring. |
employee_count | No | |
deadline_within | No | |
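Because the prefecture and industry rules above are byte-exact, a client-side pre-check can avoid 422 rejections. A minimal sketch for the prefecture forms, assuming an illustrative base/suffix subset (the server's full alias tables are not published here):

```python
import re

PREF_JA = {"東京都", "大阪府"}            # illustrative Japanese-name subset
ROMAJI = {"tokyo": "to", "osaka": "fu"}  # base -> suffix pairs (assumed)

def is_valid_prefecture(value: str) -> bool:
    """Accept only byte-exact documented forms; anything else would 422."""
    if re.fullmatch(r"0[1-9]|[1-3][0-9]|4[0-7]", value):  # JIS X 0401 01-47
        return True
    if value in PREF_JA:  # Japanese prefecture name
        return True
    for base, sfx in ROMAJI.items():
        # The 4 documented lowercase forms: tokyo / tokyoto / tokyo-to / tokyo_to
        if value in {base, base + sfx, f"{base}-{sfx}", f"{base}_{sfx}"}:
            return True
    return False  # uppercase, whitespace, or other variants are rejected
```

The same exact-match approach would apply to the industry and company_size aliases; in production the 422 response's correction_hints remain the authoritative fallback.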
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the scoring factors, output fields, and the backend data join, indicating a read-only calculation. However, it does not explicitly confirm the absence of side effects, auth requirements, or rate limits, though the nature of scoring implies safety.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a few tight sentences, front-loaded with the core purpose and key details (factors, output fields). Every sentence adds value with no redundancy or fluff.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without an output schema, the description outlines return fields (eligibility_score, score_breakdown, etc.) sufficiently. It mentions the backend join, adding context. However, it does not address pagination or sorting behavior implied by the 'limit' parameter, so minor gaps remain.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 33% (low). The description maps three factors (industry, size, prefecture) to parameters but does not explain limit, employee_count, or deadline_within. It adds meaning for scoring factors but misses opportunities to clarify optional parameters, leaving gaps.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: eligibility scoring for Japanese grants based on company profile. It specifies the scoring factors and the output fields (eligibility_score, score_breakdown, etc.). This distinctly separates it from sibling tools like jp_grants_search or jp_grants_detail.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for scoring eligibility but does not explicitly state when to use this tool versus alternatives. It lacks guidance on when not to use it or which sibling tool to choose instead, leaving the agent to infer from the scoring focus.

jp_grants_upcoming: B

Japanese grants with deadline approaching within N days (deadline-aware calendar intelligence). Preview free; production $0.05/call.

Parameters (JSON Schema)
Name | Required | Description | Default
days | No | Deadline window in days |
limit | No | |
prefecture | No | JIS X 0401 2-digit code (01-47), JP name (東京都), or lowercase Romaji in 4 documented forms: base (tokyo) / base+suffix (tokyoto) / base-suffix (tokyo-to) / base_suffix (tokyo_to). Uppercase / mixed-case / whitespace / arbitrary symbol insertion are rejected (422 + correction_hints) — pick exactly one of the 4 documented lowercase forms. |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses pricing (preview free, production $0.05/call), which is a behavioral cost constraint, but lacks details on data freshness, side effects, or read-only status. Adequate but not rich.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences are concise and front-loaded with purpose and pricing. Every sentence adds value, but could be more structured or include usage hints. No wasted words.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema and no annotations. Description covers purpose and pricing but lacks parameter explanations beyond schema, usage guidelines, and behavioral transparency. Just adequate for a simple query tool with three parameters.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 67%, with descriptions for 'days' and 'prefecture' in the schema. The description reinforces 'within N days' but adds no new meaning beyond the schema. 'limit' is described in neither the schema nor the tool description. Baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists Japanese grants with approaching deadlines, using specific verbs and resource (grants with deadline). The parenthetical 'deadline-aware calendar intelligence' adds specificity and distinguishes it from sibling tools like jp_grants_search and jp_grants_detail.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like jp_grants_search or jp_grants_eligible. The description does not specify exclusions or prerequisites, leaving the agent to infer usage context.

jp_local_houjin_lookup_by_name: A

JP Local Pack: candidate lookup from typed company name to Japanese corporate number. Use for backoffice onboarding and invoice/corporate verification workflows; returns ranked candidates and confidence labels, not legal identity proof. Production x402 price $0.02.

Parameters (JSON Schema)
Name | Required | Description | Default
name | Yes | |
pref | No | Optional JIS prefecture code filter |
limit | No | |
verbose | No | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that results are 'ranked candidates and confidence labels' and that it is not legal proof, but does not cover failure modes, ranking logic, authentication, or rate limits. For a simple lookup, this is adequate but not thorough.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core purpose and followed by use cases and caveats. Every word adds value; no redundancy.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and low parameter coverage, the description partly compensates with use-case context and result nature (ranked candidates, confidence labels), but omits output structure, error handling, and parameter-specific guidance.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is low (25%), and the description only mentions 'typed company name' without elaborating on parameters like 'pref', 'limit', or 'verbose'. It adds minimal meaning beyond the schema's bare definitions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's verb ('lookup'), resource ('Japanese corporate number'), input ('typed company name'), and use cases ('backoffice onboarding and invoice/corporate verification'), distinguishing it from siblings like jp_data_company_by_name by specifying 'local' and 'candidate' nature.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use ('backoffice onboarding and invoice/corporate verification workflows') and what it does not provide ('not legal identity proof'), but lacks explicit contrast with sibling tools or directions for when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jp_local_invoice_issuer_by_name (B)

JP Local Pack: name-to-T-number candidate lookup plus NTA registration-state verification for each returned candidate. Separates name-resolution confidence from NTA registration status. Production x402 price $0.02.

Parameters (JSON Schema)

| Name    | Required | Description                          | Default |
|---------|----------|--------------------------------------|---------|
| name    | Yes      |                                      |         |
| pref    | No       | Optional JIS prefecture code filter  |         |
| limit   | No       |                                      |         |
| verbose | No       |                                      |         |
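As context for how an agent would invoke this tool over the MCP Streamable HTTP transport, the sketch below builds a JSON-RPC 2.0 `tools/call` request body. The tool name and argument keys come from the table above; the server URL, x402 payment handling, and the example company name are assumptions, and only the request body is constructed here.

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request body for an MCP server."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(payload)

# Candidate lookup for a typed company name, filtered to Tokyo (JIS prefecture 13).
body = build_tool_call(
    "jp_local_invoice_issuer_by_name",
    {"name": "トヨタ自動車", "pref": "13", "limit": 5},
)
```

The returned string would be POSTed to the server's MCP endpoint; since no output schema is published, the response shape (ranked candidates with confidence labels plus NTA registration state) would need to be inspected at runtime.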
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations exist, so the description carries the full burden. It discloses the core behavior (candidate lookup + registration verification) and separates confidence from status, but omits details like read-only nature, rate limits, authentication, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences, front-loading the key action. It avoids redundancy and includes only essential information, though the price detail is marginally useful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no output schema, and no annotations, the description lacks crucial details: what the output format is, how to interpret confidence vs. registration status, what 'verbose' does, and how limit affects results. The agent may struggle to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 25% (only pref has a description). The description adds context by explaining the overall purpose, which helps interpret 'name' and 'pref', but does not detail each parameter's semantics. It partially compensates for low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it performs 'name-to-T-number candidate lookup plus NTA registration-state verification', separating name-resolution confidence from registration status. This specificity distinguishes it from siblings like jp_local_houjin_lookup_by_name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives (e.g., jp_local_houjin_lookup_by_name, jp_local_invoice_registration_lookup). The description only mentions it is part of the 'JP Local Pack' but does not provide selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jp_local_invoice_registration_lookup (A)

JP Local Pack: verify a Japanese qualified invoice issuer T-number against the NTA public download registry. Input T-number; preview returns registration_status, active flag, name, address, source manifest, and source_last_update_date. Production x402 price $0.005.

Parameters (JSON Schema)

| Name    | Required | Description                                           | Default |
|---------|----------|-------------------------------------------------------|---------|
| number  | Yes      | Qualified invoice issuer number, e.g. T1180301018771  |         |
| verbose | No       |                                                       |         |
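Since each call carries an x402 fee, a cheap client-side format check on the T-number before invoking the tool is worthwhile. The rule below ("T" followed by a 13-digit corporate number) is inferred from the schema's example value, not from an authoritative specification; confirm the exact format against NTA documentation.

```python
import re

# "T" plus a 13-digit corporate number, inferred from the schema's example
# value T1180301018771; verify the exact rule against NTA documentation.
T_NUMBER_RE = re.compile(r"^T\d{13}$")

def is_valid_t_number(number: str) -> bool:
    """Sanity-check a qualified invoice issuer number before calling the tool."""
    return bool(T_NUMBER_RE.fullmatch(number))

arguments = {"number": "T1180301018771", "verbose": False}
```

Passing this check does not mean the issuer is registered; that is exactly what the tool's NTA registry lookup determines.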
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the action as verifying against a registry and lists returned fields, but does not disclose authentication needs, rate limits, or potential side effects. The mention of cost ($0.005) is a useful behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: the first states the purpose, the second lists return fields, and the third states the cost. No wasted words, front-loaded with purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple lookup tool with 2 params and no output schema, the description provides sufficient context: purpose, return fields, and cost. Missing explanation of the 'verbose' parameter, but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 50% (only 'number' has a description). The description adds value by explaining what the tool returns, but does not explain the 'verbose' parameter. It partly compensates for schema gaps, though not fully.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool verifies a Japanese qualified invoice issuer T-number against a specific registry (NTA public download registry). The verb 'verify' and resource are explicit, and it distinguishes from sibling jp_local_invoice_issuer_by_name (which searches by name).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage when you have a T-number to check, but does not explicitly state when to use it versus alternatives (e.g., jp_local_invoice_issuer_by_name) or provide any 'when not to use' guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

jp_local_payroll_summary (A)

JP Local Pack: month-level Japan payroll summary for AI backoffice agents. Input monthly salary, target month, age, dependents, and prefecture; preview returns payroll deductions, social insurance, holidays, business days, tax-rate context, source URLs, and explicit scope limits. Production x402 price $0.015.

Parameters (JSON Schema)

| Name           | Required | Description                         | Default |
|----------------|----------|-------------------------------------|---------|
| age            | No       |                                     |         |
| salary         | No       | Alias for monthly_salary            |         |
| verbose        | No       |                                     |         |
| pref_code      | No       | JIS prefecture code, e.g. 13=Tokyo  |         |
| dependents     | No       |                                     |         |
| prefecture     | No       | Alias for pref_code                 |         |
| year_month     | No       |                                     | 2026-04 |
| monthly_salary | Yes      |                                     |         |
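Because the schema documents two alias pairs (salary/monthly_salary and prefecture/pref_code), a client may want to normalize arguments before calling the tool. The sketch below assumes canonical keys win when both are present; the server's actual alias-resolution order is not documented, and the example values (salary, Tokyo prefecture code, April 2026) are illustrative only.

```python
def normalize_payroll_args(args: dict) -> dict:
    """Resolve documented aliases before calling jp_local_payroll_summary.

    Per the parameter table, `salary` aliases `monthly_salary` and
    `prefecture` aliases `pref_code`. Canonical keys win if both appear
    (an assumption; the server's tie-break rule is not documented).
    """
    out = dict(args)
    for alias, canonical in (("salary", "monthly_salary"),
                             ("prefecture", "pref_code")):
        if alias in out:
            value = out.pop(alias)
            out.setdefault(canonical, value)
    return out

# Example request arguments for an April 2026 summary.
payroll_args = normalize_payroll_args({
    "salary": 300000,        # alias for monthly_salary (JPY, assumed unit)
    "prefecture": 13,        # alias for pref_code; 13 = Tokyo
    "year_month": "2026-04",
    "age": 35,
    "dependents": 1,
})
```

Normalizing client-side keeps requests unambiguous even if the server accepts either spelling.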
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It lists return contents (deductions, insurance, holidays, etc.) and mentions scope limits and price. However, it does not state whether the tool is read-only or has side effects, though payroll calculation is inherently read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, concise and front-loaded with the core purpose. It includes price information, which is useful but not essential. No wasted words, though the structure could be slightly tighter.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains return values (payroll deductions, social insurance, holidays, etc.). It covers key outputs and limits. However, it could be more precise about parameter input formats (e.g., prefecture code vs. name). Overall sufficient for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (38%), and the description lists five of the eight parameters (monthly salary, target month, age, dependents, prefecture) but adds minimal detail beyond their existence. It does not explain parameter formats or constraints (e.g., the pref_code format). A baseline of 3 is appropriate: the description provides a list but lacks depth.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides a month-level Japan payroll summary for AI backoffice agents, specifying inputs and outputs. However, it does not explicitly distinguish itself from sibling tools, though siblings are in unrelated domains (weather, company, grants, etc.), so the purpose is still clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for payroll computation but provides no explicit guidance on when to use it versus alternatives, or when not to use it. It mentions 'for AI backoffice agents' as context but offers no exclusions or situational advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
