micro-data-api-factory
Server Details
17+ Japan MCP tools (weather/calendar v2/local-pack/enrich). x402 on Base, wallet-free trial.
- Status: Unhealthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: MatsushitaTokitsugu/micro-data-api-factory-public
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 13 of 13 tools scored. Lowest: 3/5.
Each tool has a clearly distinct purpose, grouped by domain (company data, grants, local compliance). Within groups, tools differ by input keys or specific operations, with no overlaps. Descriptions make boundaries explicit.
All tools follow the pattern jp_{domain}_{specific_action}, using consistent snake_case and predictable naming. Group prefixes (data, grants, local) clearly segment functionality.
13 tools is well-scoped for a server covering three specialized areas. Each tool serves a distinct need; none feel redundant or missing for the stated purpose.
The tool set covers core operations for Japanese company enrichment (multiple lookup methods, filings, people), grants (search, detail, eligibility, upcoming), and local compliance (invoice verification, payroll). No obvious gaps for a read-oriented data API.
Available Tools
13 tools
jp_data_company_by_domain (A)
JP Data Enrich: candidate company lookup from a domain or URL. Uses company_url and brand-alias fallback where necessary; returns confidence labels so agents do not treat domain matches as legal proof. Production x402 price $0.02.
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | Domain or URL, e.g. https://www.bk.mufg.jp/ | |
| verbose | No | | |
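As a point of reference, here is a minimal sketch of an MCP tools/call request for this tool, assuming the Streamable HTTP transport listed above; the domain value reuses the schema's own example, and the request id is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jp_data_company_by_domain",
    "arguments": { "domain": "https://www.bk.mufg.jp/" }
  }
}
```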
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses the fallback mechanism and confidence labels, indicating the tool's non-authoritative nature and pricing. It adequately conveys read-only behavior without explicit declaration.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences convey purpose, fallback, confidence labels, and pricing with no wasted words. Efficient and front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's core function and fallback, but lacks details on return structure or output format. Given no output schema, more information would be beneficial for agents.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds value for the domain parameter by mentioning fallback, but fails to describe the 'verbose' parameter (schema coverage 50%). No additional semantics are provided beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'candidate company lookup from a domain or URL', specifying the verb, resource, and input type. It distinguishes from siblings like jp_data_company_by_name or jp_data_company_by_houjin_number by focusing on domain input.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context for when to use the tool (enrichment, not legal proof) and mentions fallback behavior. However, it does not explicitly exclude alternatives or state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_data_company_by_houjin_number (A)
JP Data Enrich: company profile enrichment by 13-digit Japanese corporate number. Returns gBizINFO-derived profile fields such as address, representative, capital, employees, business summary, and source coverage. Production x402 price $0.02.
| Name | Required | Description | Default |
|---|---|---|---|
| verbose | No | | |
| houjin_number | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the data source (gBizINFO) and returns specific fields, but does not disclose read-only nature (inferred but not stated), rate limits, data freshness, or error handling. The pricing detail is a minor transparency win but insufficient for a score above 3.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences and no redundant information. However, the pricing note 'Production x402 price $0.02' may be tangential for tool selection, slightly reducing clarity. Still, it is well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description moderately explains return values with examples (address, representative, etc.) but lacks completeness on edge cases, error responses, or pagination. For a lookup tool with two parameters, this is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must compensate. While houjin_number is self-explanatory from its pattern and name, the verbose parameter is completely undescribed. The description lists example return fields but adds no meaning to the parameters themselves, leaving verbose's effect ambiguous.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: company profile enrichment using a 13-digit Japanese corporate number. It lists specific fields returned, distinguishing it from sibling tools that use different lookup methods (by_domain, by_name). The verb 'enrichment' and resource 'company profile' are specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a houjin_number is available, but lacks explicit guidance on when not to use this tool or when alternatives (e.g., jp_data_company_by_name) are more appropriate. No exclusions or prerequisites are provided, leaving the agent to infer based on the parameter.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_data_company_by_name (A)
JP Data Enrich: Japanese company candidate search from name or brand input. Returns corporate-number candidates, confidence labels, ambiguity, and alternative modes. Use as a discovery step, not authoritative KYB verification. Production x402 price $0.02.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| limit | No | | |
| verbose | No | | |
| match_mode | No | | |
| entity_type | No | | |
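A hedged sketch of a call using the optional limit parameter; the company name and limit values are illustrative, and match_mode and entity_type are omitted because their accepted values are not documented here:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jp_data_company_by_name",
    "arguments": { "name": "トヨタ自動車", "limit": 3 }
  }
}
```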
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must carry behavioral transparency. It notes that the tool returns confidence labels and ambiguity information, but lacks details on side effects, error handling, or external API calls. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with purpose. Price information is slightly extraneous but not problematic. Efficient overall.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 5 parameters, no output schema, and no parameter descriptions, the description is insufficient. It omits return structure, how to interpret confidence, and parameter usage details, leaving significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%. The description only references 'name or brand input' but does not explain parameters like limit, verbose, match_mode, or entity_type. Agents get no guidance beyond schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for Japanese company candidates from name or brand input, returning corporate-number candidates with confidence labels. This distinguishes it from sibling tools like jp_data_company_by_domain and jp_data_company_by_houjin_number.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises use as a discovery step, not for authoritative KYB verification, providing clear context for when to use this tool. Does not directly mention alternatives but the purpose is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_data_edinet_filings (B)
JP Data Enrich: EDINET filing metadata by EDINET code. Returns recent filing IDs, dates, form names, XBRL availability, and document URLs. Production x402 price $0.02.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| verbose | No | | |
| edinet_code | Yes | EDINET code, e.g. E02144 | |
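For illustration, a sketch of a filings request reusing the schema's example EDINET code; the limit value is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jp_data_edinet_filings",
    "arguments": { "edinet_code": "E02144", "limit": 5 }
  }
}
```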
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must reveal behavior. It states it returns metadata, implying a read operation, but doesn't explicitly confirm read-only nature or disclose any potential side effects, latency, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief, and the included pricing info aids cost awareness. It is concise and front-loads the key purpose, though the price could be considered extraneous.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While it lists return fields, it lacks an output schema and doesn't fully describe the format or all possible return values. Adequate but incomplete for a tool with no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one of three parameters (edinet_code) has a schema description. The tool description does not explain 'limit' or 'verbose' beyond their schema definitions, adding minimal value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns EDINET filing metadata by EDINET code, listing specific fields. It differentiates itself from sibling tools by focusing on EDINET filings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description does not explain prerequisites or scenarios where other tools might be preferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_data_people_by_houjin (A)
JP Data Enrich: extract public EDINET officer information by corporate number where EDINET mapping is available. Public filing data only; no contact scraping or private-source enrichment. Production x402 price $0.05.
| Name | Required | Description | Default |
|---|---|---|---|
| verbose | No | | |
| houjin_number | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses that the tool uses public filing data only and does not scrape contacts, but does not mention rate limits or error handling for missing mapping.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words, front-loaded with purpose and constraints.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose and limitations adequately for a simple tool, though output format is not described (no output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so description must compensate. It implies houjin_number is a corporate number but adds no detail beyond schema; verbose parameter is not explained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it extracts public EDINET officer information by corporate number, distinguishing it from sibling tools like jp_data_company_by_houjin_number which focus on company data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Specifies data source (EDINET) and constraints (public data only, no scraping), but lacks explicit comparison to alternatives or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_grants_detail (A)
Detailed information for a specific Japanese grant by J-Grants subsidy ID (e.g. id=a0WJ200000CDYBTMA5). Returns ministry, outline, subsidy_rate, max_subsidy_amount, deadline_date, application_method, etc.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | J-Grants subsidy ID | |
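A minimal sketch of a detail lookup, reusing the example subsidy ID from the tool description:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jp_grants_detail",
    "arguments": { "id": "a0WJ200000CDYBTMA5" }
  }
}
```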
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the output fields but does not disclose any behavioral traits such as side effects, authentication requirements, or rate limits. The operation is implied to be read-only but not explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys purpose, example, and output fields. Every word contributes meaning without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (one simple parameter, no nested objects, no output schema), the description adequately covers what the tool does. It lists the key returned fields, though it could benefit from a brief note on the response structure or format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'id', which is already described as 'J-Grants subsidy ID' in the schema. The description adds a concrete example ID (a0WJ200000CDYBTMA5) and lists returned fields, but this adds only marginal semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns detailed information for a specific Japanese grant by subsidy ID. It provides an example ID format and lists specific returned fields (ministry, outline, etc.), distinguishing it from sibling tools like jp_grants_search and jp_grants_eligible.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a specific subsidy ID is available, but it does not explicitly state when to use this tool over alternatives like jp_grants_search or jp_grants_eligible. No exclusion criteria or context for when not to use it is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_grants_eligible (A)
Review-priority ranking for Japanese grants by company profile (industry / size / prefecture). Returns ranking_score, nullable eligibility_score, confidence_score, uncertainty_score, unknown_criteria, score_breakdown, recommendation, and match_reasons. eligibility_score is suppressed when upstream requirement data is too sparse. Backed by live J-Grants search plus supplemental public-history proxy rows where available.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| industry | Yes | JSIC Rev.13 major-division code uppercase (A-T) or documented lowercase aliases (canonical English label like manufacturing / ict / healthcare, additional aliases like real-estate, or Japanese label like 製造業 / 情報通信 / 医療福祉). Uppercase / mixed-case English (Manufacturing / ICT / RealEstate) / whitespace insertion / underscore-separated forms are rejected (422 + correction_hints) — pick exactly one of the documented byte-exact forms. | |
| prefecture | No | JIS X 0401 2-digit code (01-47), JP name (東京都), or lowercase Romaji in 4 documented forms: base (tokyo) / base+suffix (tokyoto) / base-suffix (tokyo-to) / base_suffix (tokyo_to). Uppercase / mixed-case / whitespace / arbitrary symbol insertion are rejected (422 + correction_hints) — pick exactly one of the 4 documented lowercase forms. | |
| company_size | Yes | Company size bracket. Canonical values are micro / small / medium, and exact buyer-facing aliases are also accepted: micro=小規模/小規模事業者/個人事業主, small=中小企業/中小企業者/SME/スタートアップ, medium=中堅企業/中規模. Accepted aliases resolve to the canonical value before scoring. | |
| employee_count | No | | |
| deadline_within | No | | |
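Because the byte-exact input forms are strict (422 with correction_hints on any deviation), a conforming call is worth sketching; every value below uses a documented canonical form, and limit is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jp_grants_eligible",
    "arguments": {
      "industry": "manufacturing",
      "company_size": "small",
      "prefecture": "tokyo",
      "limit": 5
    }
  }
}
```

Per the schema notes, mixed-case variants such as Manufacturing or Tokyo would be rejected with a 422 and correction_hints.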
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the scoring outputs, the ranking factors, and the backing data sources, indicating a read-only calculation. However, it does not explicitly confirm the absence of side effects, auth needs, or rate limits, though the nature of scoring implies safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and key details (ranking factors, output fields). Every sentence adds value with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description outlines return fields (eligibility_score, score_breakdown, etc.) sufficiently. It mentions the backend join, adding context. However, it does not address pagination or sorting behavior implied by the 'limit' parameter, so minor gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 33% (low). The description maps three factors (industry, size, prefecture) to parameters but does not explain limit, employee_count, or deadline_within. It adds meaning for scoring factors but misses opportunities to clarify optional parameters, leaving gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: review-priority ranking of Japanese grants based on company profile. It specifies the scoring factors and the output fields (eligibility_score, score_breakdown, etc.). This distinctly separates it from sibling tools like jp_grants_search or jp_grants_detail.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for scoring eligibility but does not explicitly state when to use this tool versus alternatives. It lacks guidance on when not to use it or which sibling tool to choose instead, leaving the agent to infer from the scoring focus.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_grants_search (A)
Multi-source search for Japanese subsidies and grants by keyword, ministry, prefecture, deadline window, industry, company size. Source: Digital Agency J-Grants (PDL 1.0) + METI Mirasapo Plus + e-Stat statistics. Preview free; production $0.05/call.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| keyword | No | Search keyword (Japanese or ASCII) | |
| industry | No | JSIC Rev.13 major-division code uppercase (A-T) or documented lowercase aliases (canonical English label like manufacturing / ict / healthcare, additional aliases like real-estate, or Japanese label like 製造業 / 情報通信 / 医療福祉). Uppercase / mixed-case English (Manufacturing / ICT / RealEstate) / whitespace insertion / underscore-separated forms are rejected (422 + correction_hints) — pick exactly one of the documented byte-exact forms. | |
| ministry | No | Ministry code filter | |
| prefecture | No | JIS X 0401 2-digit code (01-47), JP name (東京都), or lowercase Romaji in 4 documented forms: base (tokyo) / base+suffix (tokyoto) / base-suffix (tokyo-to) / base_suffix (tokyo_to). Uppercase / mixed-case / whitespace / arbitrary symbol insertion are rejected (422 + correction_hints) — pick exactly one of the 4 documented lowercase forms. | |
| company_size | No | Company size bracket. Canonical values are micro / small / medium, and exact buyer-facing aliases are also accepted: micro=小規模/小規模事業者/個人事業主, small=中小企業/中小企業者/SME/スタートアップ, medium=中堅企業/中規模. Accepted aliases resolve to the canonical value before scoring. | |
| deadline_within | No | Grants with deadline within N days | |
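A sketch of a filtered search; the keyword is an illustrative Japanese term ("省エネ", energy saving), the prefecture uses the documented 2-digit JIS form, and the remaining values are assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jp_grants_search",
    "arguments": {
      "keyword": "省エネ",
      "prefecture": "13",
      "company_size": "micro",
      "deadline_within": 60
    }
  }
}
```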
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden. It discloses the pricing model and data sources, but lacks details on rate limits, authentication, pagination, or any side effects. This is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loading the core purpose and filters in the first sentence, and adding source and pricing in the second. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 7 parameters and no output schema or annotations, the description covers all filter dimensions and is fairly complete for a search tool. It could mention response format or pagination, but the current level is sufficient for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 71%, so the schema already documents most parameters. The description restates the filter categories but does not add parameter-specific details such as format or behavior beyond what the schema provides. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function as a multi-source search for Japanese subsidies and grants, listing specific filter dimensions (keyword, ministry, prefecture, etc.) and citing data sources. This distinguishes it from sibling tools like jp_grants_detail or jp_grants_upcoming.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Preview free; production $0.05/call' which implies a usage context, but it does not explicitly state when to use this tool over alternatives like jp_grants_detail or jp_grants_upcoming, nor does it provide exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_grants_upcoming (B)
Japanese grants with deadline approaching within N days (deadline-aware calendar intelligence). Preview free; production $0.05/call.
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Deadline window in days | |
| limit | No | | |
| prefecture | No | JIS X 0401 2-digit code (01-47), JP name (東京都), or lowercase Romaji in 4 documented forms: base (tokyo) / base+suffix (tokyoto) / base-suffix (tokyo-to) / base_suffix (tokyo_to). Uppercase / mixed-case / whitespace / arbitrary symbol insertion are rejected (422 + correction_hints) — pick exactly one of the 4 documented lowercase forms. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses pricing (preview free, production $0.05/call), which is a behavioral cost constraint, but lacks details on data freshness, side effects, or read-only status. Adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two sentences are concise and front-loaded with purpose and pricing. Every sentence adds value, though the description could be more structured or include usage hints. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema and no annotations. Description covers purpose and pricing but lacks parameter explanations beyond schema, usage guidelines, and behavioral transparency. Just adequate for a simple query tool with three parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67%, with descriptions for 'days' and 'prefecture' in the schema. The description reinforces 'within N days' but adds no new meaning beyond the schema. Parameter 'limit' lacks description in both schema and description. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists Japanese grants with approaching deadlines, using specific verbs and resource (grants with deadline). The parenthetical 'deadline-aware calendar intelligence' adds specificity and distinguishes it from sibling tools like jp_grants_search and jp_grants_detail.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like jp_grants_search or jp_grants_eligible. The description does not specify exclusions or prerequisites, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_local_houjin_lookup_by_name (A)
JP Local Pack: candidate lookup from typed company name to Japanese corporate number. Use for backoffice onboarding and invoice/corporate verification workflows; returns ranked candidates and confidence labels, not legal identity proof. Production x402 price $0.02.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| pref | No | Optional JIS prefecture code filter | |
| limit | No | | |
| verbose | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that results are 'ranked candidates and confidence labels' and that it is not legal proof, but does not cover failure modes, ranking logic, authentication, or rate limits. For a simple lookup, this is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core purpose and followed by use cases and caveats. Every word adds value; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and low parameter coverage, the description partly compensates with use-case context and result nature (ranked candidates, confidence labels), but omits output structure, error handling, and parameter-specific guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is low (25%), and the description only mentions 'typed company name' without elaborating on parameters like 'pref', 'limit', or 'verbose'. It adds minimal meaning beyond the schema's bare definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's verb ('lookup'), resource ('Japanese corporate number'), input ('typed company name'), and use cases ('backoffice onboarding and invoice/corporate verification'), distinguishing it from siblings like jp_data_company_by_name by specifying 'local' and 'candidate' nature.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use ('backoffice onboarding and invoice/corporate verification workflows') and what it does not provide ('not legal identity proof'), but lacks explicit contrast with sibling tools or directions for when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_local_invoice_issuer_by_name (B)
JP Local Pack: name-to-T-number candidate lookup plus NTA registration-state verification for each returned candidate. Separates name-resolution confidence from NTA registration status. Production x402 price $0.02.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| pref | No | Optional JIS prefecture code filter | |
| limit | No | | |
| verbose | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It discloses the core behavior (candidate lookup + registration verification) and separates confidence from status, but omits details like read-only nature, rate limits, authentication, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences (27 words), front-loading the key action. It avoids redundancy and includes only essential information, though the price detail is marginally useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no output schema, and no annotations, the description lacks crucial details: what the output format is, how to interpret confidence vs. registration status, what 'verbose' does, and how limit affects results. The agent may struggle to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 25% (only pref has a description). The description adds context by explaining the overall purpose, which helps interpret 'name' and 'pref', but does not detail each parameter's semantics. It partially compensates for low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it performs 'name-to-T-number candidate lookup plus NTA registration-state verification', separating name-resolution confidence from registration status. This specificity distinguishes it from siblings like jp_local_houjin_lookup_by_name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., jp_local_houjin_lookup_by_name, jp_local_invoice_registration_lookup). The description only mentions it is part of the 'JP Local Pack' but does not provide selection criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_local_invoice_registration_lookup (A)
JP Local Pack: verify a Japanese qualified invoice issuer T-number against the NTA public download registry. Input T-number; preview returns registration_status, active flag, name, address, source manifest, and source_last_update_date. Production x402 price $0.005.
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | Qualified invoice issuer number, e.g. T1180301018771 | |
| verbose | No | | |
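A minimal sketch of a verification call, reusing the schema's example T-number:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jp_local_invoice_registration_lookup",
    "arguments": { "number": "T1180301018771" }
  }
}
```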
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so description carries full burden. It describes the action as verifying against a registry and lists returned fields, but does not disclose authentication needs, rate limits, or potential side effects. The mention of cost ($0.005) is a useful behavioral detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first states purpose, second lists return fields and cost. No wasted words, front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with 2 params and no output schema, the description provides sufficient context: purpose, return fields, and cost. Missing explanation of the 'verbose' parameter, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (only 'number' has description). Description adds value by explaining what the tool returns, but does not explain the 'verbose' parameter. It partly compensates for schema gaps but not fully.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool verifies a Japanese qualified invoice issuer T-number against a specific registry (NTA public download registry). The verb 'verify' and resource are explicit, and it distinguishes from sibling jp_local_invoice_issuer_by_name (which searches by name).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage when you have a T-number to check, but does not explicitly state when to use vs alternatives (e.g., jp_local_invoice_issuer_by_name) or provide any 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jp_local_payroll_summary (A)
JP Local Pack: month-level Japan payroll summary for AI backoffice agents. Input monthly salary, target month, age, dependents, and prefecture; preview returns payroll deductions, social insurance, holidays, business days, tax-rate context, source URLs, and explicit scope limits. Production x402 price $0.015.
| Name | Required | Description | Default |
|---|---|---|---|
| age | No | | |
| salary | No | Alias for monthly_salary | |
| verbose | No | | |
| pref_code | No | JIS prefecture code, e.g. 13=Tokyo | |
| dependents | No | | |
| prefecture | No | Alias for pref_code | |
| year_month | No | 2026-04 | |
| monthly_salary | Yes | | |
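A hedged sketch of a payroll call; pref_code 13 (Tokyo) and year_month 2026-04 come from the schema notes, while the salary (assumed to be monthly JPY), age, and dependents values are illustrative assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "jp_local_payroll_summary",
    "arguments": {
      "monthly_salary": 300000,
      "year_month": "2026-04",
      "age": 35,
      "dependents": 1,
      "pref_code": "13"
    }
  }
}
```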
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It lists return contents (deductions, insurance, holidays, etc.) and mentions scope limits and price. However, it does not state whether the tool is read-only or if it has side effects, though payroll calculation is inherently read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, concise and front-loaded with the core purpose. It includes price information, which is useful but not essential. No wasted words, though the structure could be slightly tighter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains return values (payroll deductions, social insurance, holidays, etc.). It covers key outputs and limits. However, it could be more precise about parameter input formats (e.g., prefecture code vs. name). Overall sufficient for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (38%), and the description lists five of eight parameters (monthly salary, target month, age, dependents, prefecture) but adds minimal detail beyond their existence. The description does not explain parameter formats or constraints (e.g., pref_code format). Baseline 3 is appropriate since the description provides a list but lacks depth.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides a month-level Japan payroll summary for AI backoffice agents, specifying inputs and outputs. However, it does not explicitly distinguish itself from sibling tools, though siblings are in unrelated domains (weather, company, grants, etc.), so the purpose is still clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for payroll computation but provides no explicit guidance on when to use versus alternatives or when not to use. It mentions 'for AI backoffice agents' as context but no exclusions or situational advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!