
Server Details

14 Japan data tools via MCP (weather/calendar/laws/company). x402 on Base, wallet-free trial.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: MatsushitaTokitsugu/micro-data-api-factory-public
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 14 of 14 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: business calendar tools handle different date operations, company tools cover distinct data types (profile, search, procurement, subsidy), law tools provide different levels of legal information, and weather tools separate current vs. forecast data. The descriptions clearly differentiate what each tool does.

Naming Consistency: 5/5

Tools follow a consistent prefix_noun pattern throughout: biz_calendar_*, company_*, egov_laws_*, and weather_*. All use snake_case consistently with clear domain prefixes that group related functionality. The naming convention is predictable and well-organized.

Tool Count: 5/5

14 tools is well-scoped for a micro-data API factory covering four distinct domains (business calendar, company information, government laws, and weather). Each tool earns its place by providing unique functionality within its domain, and the count allows comprehensive coverage without being overwhelming.

Completeness: 5/5

The tool surface provides complete coverage for each domain: business calendar tools cover listing, checking, and navigating dates; company tools provide profile, search, procurement, and subsidy data; law tools offer metadata, full text, article extraction, and search; weather tools handle both current conditions and forecasts. No obvious gaps exist for the stated purposes.

Available Tools

14 tools
biz_calendar_gotobi (A)

List Japan Gotobi (5/10/15/20/25/month-end settlement days) for a year-month with previous-business-day adjustments.

Parameters (JSON Schema):
- axis (optional, default: bank)
- year (required)
- month (required)
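As a cross-check of the Gotobi logic the tool description summarizes, here is a minimal sketch of the calculation, assuming a weekend-only business-day rule (the real tool also adjusts for Japanese national holidays on the chosen axis, which this sketch omits):

```python
from datetime import date, timedelta
import calendar

def gotobi_days(year: int, month: int) -> list[date]:
    """Candidate settlement days (5/10/15/20/25/month-end), each rolled
    back to the previous weekday when it falls on a weekend.

    NOTE: illustrative only. A full implementation would also skip
    national holidays per the chosen axis (bank/admin/sse).
    """
    last = calendar.monthrange(year, month)[1]  # last day of the month
    days: list[date] = []
    for d in (5, 10, 15, 20, 25, last):
        dt = date(year, month, d)
        while dt.weekday() >= 5:  # Sat=5, Sun=6: roll back to the previous weekday
            dt -= timedelta(days=1)
        if dt not in days:  # month-end can collide with the 25th after rolling
            days.append(dt)
    return sorted(days)
```

For May 2024, for instance, the 5th (a Sunday) rolls back to Friday the 3rd and the 25th (a Saturday) to Friday the 24th.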
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only states what the tool calculates, not how it behaves. It doesn't disclose whether this is a read-only operation, what format the output takes, error conditions, rate limits, or authentication requirements for a financial calendar tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence efficiently conveys the essential information with zero waste. Front-loaded with the main action and resource, followed by key details about the calculation method.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with no annotations and no output schema, the description provides adequate purpose but lacks behavioral details and full parameter documentation. It's minimally viable but has clear gaps in explaining the 'axis' parameter and output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates by explaining the core parameters' purpose: 'year-month' for the time period and 'previous-business-day adjustments' for the calculation logic. However, it doesn't explain the 'axis' parameter with its enum values (bank/admin/sse), leaving one parameter undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List Japan Gotobi'), identifies the resource (settlement days), and distinguishes from siblings by specifying the unique Gotobi calculation (5/10/15/20/25/month-end) with business-day adjustments. It's more specific than generic calendar tools like 'biz_calendar_holidays'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (financial/business calendar needs for Japan) but doesn't explicitly state when to use this tool versus alternatives like 'biz_calendar_holidays' or 'biz_calendar_next_business_day'. No guidance on prerequisites or exclusions is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

biz_calendar_holidays (B)

List all Japan national holidays for a given year (2024-2027).

Parameters (JSON Schema):
- year (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states it 'List[s] all Japan national holidays' but does not describe output format, pagination, rate limits, authentication needs, or error handling. For a tool with zero annotation coverage, this is a significant gap in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key information ('List all Japan national holidays for a given year') and adds necessary detail ('(2024-2027)'). There is no wasted text, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and lack of annotations or output schema, the description is minimally adequate. It covers the basic purpose and parameter range but lacks details on output format, error handling, and usage context, which are needed for full completeness in the absence of structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning beyond the input schema by specifying the parameter 'year' must be between 2024 and 2027, which aligns with the schema's constraints. Since schema description coverage is 0% and there is only one parameter, the description effectively compensates, providing clear semantics for the parameter's purpose and range.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('Japan national holidays') with a specific scope ('for a given year (2024-2027)'). It distinguishes from some siblings like 'biz_calendar_is_business_day' but not all (e.g., 'biz_calendar_gotobi' is ambiguous). The purpose is specific but could better differentiate from all sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the year range but does not specify use cases, prerequisites, or comparisons to siblings like 'biz_calendar_is_business_day' or 'biz_calendar_next_business_day'. This leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

biz_calendar_is_business_day (A)

Check whether a Japan calendar date is a business day on the bank/admin/sse axis. Data: Cabinet Office of Japan public holidays.

Parameters (JSON Schema):
- axis (optional, default: bank)
- date (required, format: YYYY-MM-DD)
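Since the server speaks Streamable HTTP, an MCP client invokes a tool with a JSON-RPC 2.0 `tools/call` request. A sketch of the request body for this tool follows (the argument values are illustrative, and endpoint/session headers depend on the deployment and any gateway in front of it):

```python
import json

# Hypothetical "tools/call" body an MCP client would POST to the server's
# Streamable HTTP endpoint to invoke biz_calendar_is_business_day.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "biz_calendar_is_business_day",
        # 2025-01-01 (New Year's Day) should not be a business day on any axis.
        "arguments": {"date": "2025-01-01", "axis": "bank"},
    },
}
body = json.dumps(request)
```

The server's reply would arrive as a JSON-RPC result whose content carries the tool output; the description under review does not document that output's shape.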
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. While it mentions the data source, it doesn't disclose behavioral traits such as rate limits, error handling, whether the tool caches data, or what happens with invalid dates. For a tool with no annotation coverage, this leaves significant gaps in understanding its operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in a single sentence that front-loads the core purpose and includes essential context about data source. Every word earns its place with no redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose and data source well, but lacks details about return format (e.g., boolean response), error cases, or axis-specific behaviors that would help an agent use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 50% schema description coverage (only the date parameter has a description), the description adds meaningful context by explaining the axis parameter's purpose ('bank/admin/sse axis') and the overall tool's focus on Japan business days. This compensates well for the schema's partial documentation, though it doesn't detail the specific differences between the axis options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check whether'), resource ('a Japan calendar date'), and scope ('is a business day on the bank/admin/sse axis'), with explicit data source attribution ('Cabinet Office of Japan public holidays'). It distinguishes from siblings like biz_calendar_holidays (which likely lists holidays) and biz_calendar_next_business_day (which finds next business day).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for checking business days in Japan with specific axes, but doesn't explicitly state when to use this tool versus alternatives like biz_calendar_holidays or weather tools. It mentions the axis options but doesn't provide guidance on choosing between bank, admin, or sse axes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

biz_calendar_next_business_day (C)

Next Japan business day after a given date on the bank/admin/sse axis (skip N days).

Parameters (JSON Schema):
- axis (optional, default: bank)
- date (required)
- skip (optional)
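The "skip N days" semantics the description hints at can be sketched as follows, again assuming a weekend-only business-day rule (the real tool also honors Japanese national holidays on the chosen axis, and its exact interpretation of `skip` is not documented):

```python
from datetime import date, timedelta

def next_business_day(start: date, skip: int = 0) -> date:
    """Return the next business day strictly after `start`, then advance
    `skip` additional business days.

    Illustrative weekend-only rule; a full implementation would also skip
    national holidays per the chosen axis (bank/admin/sse).
    """
    d = start
    remaining = skip + 1  # the first business day found plus `skip` more
    while remaining:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon-Fri
            remaining -= 1
    return d
```

Under this rule, the next business day after Friday 2024-05-03 is Monday 2024-05-06, and `skip=1` yields Tuesday 2024-05-07.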
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'skip N days' and the axis, but fails to explain key behaviors such as how holidays are handled, what constitutes a 'business day' in Japan, whether the tool accounts for weekends, or any rate limits or error conditions. This leaves significant gaps in understanding the tool's operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every part of the sentence contributes to understanding the tool's function, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of business day calculations and the lack of annotations and output schema, the description is incomplete. It does not cover how the tool handles edge cases, what the output looks like, or detailed parameter usage. For a tool with 3 parameters and no structured support, more context is needed to ensure proper use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds some meaning by mentioning 'bank/admin/sse axis' and 'skip N days', which correspond to the 'axis' and 'skip' parameters. However, it does not explain the 'date' parameter format or provide details beyond what is implied, leaving parameters partially undocumented. This meets the baseline for low coverage but doesn't fully compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Next Japan business day after a given date on the bank/admin/sse axis (skip N days).' It specifies the verb ('Next'), resource ('business day'), and scope ('Japan'), but does not explicitly differentiate from sibling tools like 'biz_calendar_is_business_day' or 'biz_calendar_gotobi', which likely serve related but distinct purposes. This makes it clear but not fully sibling-distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the 'bank/admin/sse axis' but does not explain what these axes mean or when to choose one over another. There is no mention of prerequisites, exclusions, or comparisons to sibling tools like 'biz_calendar_is_business_day', leaving users without clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

company_procurement (B)

Japanese government procurement history for a company by corporate number. For due-diligence and competitive-intel agents.

Parameters (JSON Schema):
- corp_num (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It states what data is retrieved (procurement history) but doesn't describe behavioral traits like whether this is a read-only operation, what format the data returns, potential rate limits, authentication requirements, or error conditions. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two brief sentences) with zero wasted words. The first sentence states the core functionality, and the second provides usage context. Every element earns its place, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving specialized government data), lack of annotations, and no output schema, the description is insufficiently complete. It doesn't explain what 'procurement history' includes (e.g., time range, detail level), how results are structured, or potential limitations. For due-diligence use cases, more operational context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context about the single parameter 'corp_num' by specifying it's a 'corporate number' used to identify companies for procurement history lookup. Since schema description coverage is 0% (the schema only shows type/pattern constraints), this semantic clarification is valuable. However, it doesn't explain where to obtain this number or provide examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving Japanese government procurement history for a company using a corporate number. It specifies the resource (procurement history), the key identifier (corporate number), and the context (Japanese government). However, it doesn't explicitly differentiate from sibling tools like 'company_profile' or 'company_search' that might also retrieve company information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('for due-diligence and competitive-intel agents'), suggesting it's for investigative or business intelligence purposes. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'company_profile' (which might give general company info) or 'company_search' (which might find companies by name). No exclusion criteria or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

company_profile (B)

Company profile by 13-digit Japanese corporate number (hojin-bango). e.g. 1180301018771 = Toyota Motor Corporation.

Parameters (JSON Schema):
- corp_num (required)
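The 13-digit corporate number (hojin-bango) carries a leading check digit over the remaining 12 digits, so a client can sanity-check `corp_num` before calling the tool. A sketch of the commonly documented check-digit rule (illustrative validator, not part of this server; verify against the National Tax Agency's specification before relying on it):

```python
def valid_corporate_number(num: str) -> bool:
    """Validate a 13-digit Japanese corporate number (hojin-bango).

    The leading digit is a check digit: check = 9 - (n mod 9), where n sums
    the 12 base digits weighted 2 at even positions and 1 at odd positions,
    positions counted from the lowest digit.
    """
    if len(num) != 13 or not num.isdigit():
        return False
    check, base = int(num[0]), num[1:]
    digits = [int(c) for c in reversed(base)]  # index 0 = position 1 (lowest)
    total = sum(d * (2 if (i + 1) % 2 == 0 else 1) for i, d in enumerate(digits))
    return check == 9 - (total % 9)
```

The Toyota example from the tool description, 1180301018771, passes this check.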
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does (retrieve a profile) but doesn't describe any behavioral traits: no information about response format, error conditions, rate limits, authentication requirements, or whether it's a read-only operation. The example is helpful but doesn't compensate for these gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (one sentence plus an example) with zero wasted words. It's front-loaded with the core purpose and follows with a helpful illustration. Every element earns its place by adding value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (single parameter lookup tool), no annotations, and no output schema, the description is incomplete. It doesn't explain what information the company profile contains, how results are structured, or potential error cases. While concise, it leaves too many behavioral questions unanswered for a tool that presumably returns structured business data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema. The schema only specifies a 13-digit pattern, but the description explains this is a 'Japanese corporate number (hojin-bango)' and provides a concrete example with Toyota's number. This contextualizes what the parameter represents, compensating for the 0% schema description coverage. However, it doesn't explain what happens with invalid numbers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a company profile using a specific identifier (13-digit Japanese corporate number). It provides a concrete example (Toyota Motor Corporation) that illustrates the resource being accessed. However, it doesn't explicitly differentiate from sibling tools like 'company_search' or 'company_procurement'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'company_search' or 'company_procurement'. It doesn't mention prerequisites (e.g., needing a valid corporate number) or contextual constraints. The example helps illustrate the parameter format but doesn't constitute usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

company_subsidy (B)

Japanese public-subsidy history for a company by corporate number. For due-diligence, funding-analysis, compliance agents.

Parameters (JSON Schema):
- corp_num (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool's purpose but doesn't describe what the return data looks like, whether there are rate limits, authentication requirements, or potential errors. For a tool with no annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, packing purpose, key parameter context, and use cases into two short sentences. Every word earns its place with no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter but no output schema or annotations), the description provides adequate basic information about purpose and parameter meaning. However, it lacks details about return format, error conditions, or behavioral constraints that would be needed for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context about the single parameter (corp_num) by specifying it's a corporate number used to identify companies for subsidy history lookup. With 0% schema description coverage, this compensates well by explaining what the parameter represents, though it doesn't detail the exact 13-digit format that the schema's pattern enforces.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving Japanese public-subsidy history for a company using a corporate number. It specifies the resource (subsidy history) and verb (retrieving), though it doesn't explicitly distinguish from sibling tools like company_profile or company_search. The mention of use cases (due-diligence, funding-analysis, compliance) adds helpful context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the listed use cases (due-diligence, etc.), suggesting when this tool might be appropriate. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like company_profile or company_search, nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

egov_laws_article (A)

Extract a specific article from a Japanese law by law_id + article_num.

Parameters (JSON Schema):
- law_id (required)
- article_num (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the extraction action but lacks details on behavior such as error handling, rate limits, authentication needs, or output format, leaving significant gaps for a tool with no structured safety hints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose with no wasted words, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and low schema coverage, the description is minimally adequate for a simple lookup tool but lacks completeness in behavioral details and output expectations, leaving room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for 2 parameters, the description compensates by explaining that 'law_id' and 'article_num' are used to extract a specific article, adding meaningful context beyond the bare schema, though it doesn't detail format or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Extract'), resource ('a specific article from a Japanese law'), and mechanism ('by law_id + article_num'), distinguishing it from sibling tools like 'egov_laws_full' (full law) and 'egov_laws_search' (search).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a specific article is needed, but provides no explicit guidance on when to use this tool versus alternatives like 'egov_laws_full' or 'egov_laws_search', nor any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

egov_laws_full (C)

Full structured law text (all chapters / sections / articles, JSON).

Parameters (JSON Schema):
- law_id (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the output is 'JSON' and implies a read operation by retrieving text, but doesn't cover critical aspects like authentication needs, rate limits, error handling, or whether it's idempotent. For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
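The gap noted above could be closed with the MCP specification's optional behavioral annotations rather than a longer description. A minimal sketch of how a tool definition could declare them; the annotation names follow the MCP spec, but the values are assumptions about how a read-only lookup like this would behave, not the server's actual definition:

```python
# Sketch: declaring MCP behavioral annotations for a read-only lookup tool.
# Annotation names per the MCP spec; values are assumptions, not the
# server's actual configuration.
tool = {
    "name": "egov_laws_full",
    "description": "Full structured law text (all chapters / sections / articles, JSON).",
    "annotations": {
        "readOnlyHint": True,      # fetches data, modifies nothing
        "destructiveHint": False,  # no destructive side effects
        "idempotentHint": True,    # same law_id yields the same result
        "openWorldHint": True,     # depends on an external API
    },
}
```

With these hints present, the prose description no longer carries the full burden of behavioral disclosure.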

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with a single phrase that front-loads key information: it specifies the resource ('law text'), scope ('all chapters / sections / articles'), and format ('JSON'). Every word earns its place without redundancy, making it highly efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving structured legal data), lack of annotations, and no output schema, the description is incomplete. It doesn't explain the JSON structure, potential errors, or usage constraints. For a tool with no structured support, the description should provide more context to guide the agent effectively, but it falls short.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, so the description must compensate. It doesn't explicitly explain the 'law_id' parameter, but the phrase 'Full structured law text' implies the tool fetches content keyed by a law identifier, adding some semantic context. With no parameters documented in the schema, this minimal but meaningful compensation justifies a score above the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
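A single sentence in the input schema would lift the 0% coverage. A hypothetical revision, borrowing the example ID that the sibling egov_laws_meta schema already documents (129AC0000000089 for 民法, the Civil Code):

```python
# Sketch: adding a parameter description so the schema carries its own intent.
# The description text is illustrative, not the server's actual schema.
input_schema = {
    "type": "object",
    "properties": {
        "law_id": {
            "type": "string",
            "description": "e-Gov law ID, e.g. 129AC0000000089 for 民法 (Civil Code).",
        }
    },
    "required": ["law_id"],
}

# Coverage check: every property now has a description.
described = [p for p in input_schema["properties"].values() if "description" in p]
coverage = len(described) / len(input_schema["properties"])
```

This one-line change would move the schema from 0% to 100% description coverage.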

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool retrieves 'Full structured law text (all chapters / sections / articles, JSON)', which clarifies it fetches complete legal content in JSON format. However, it doesn't distinguish itself from sibling tools like 'egov_laws_article' (which likely retrieves specific articles) or 'egov_laws_meta' (which likely provides metadata), leaving the differentiation vague. The purpose is clear but lacks sibling-specific context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention when to choose it over 'egov_laws_article' for specific articles or 'egov_laws_search' for filtered results, nor does it specify prerequisites or exclusions. This absence of usage context leaves the agent without direction for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
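One added sentence of the "use X instead of Y when Z" form would address the routing gap. A hypothetical rewrite of the description; the wording is illustrative, not the server's:

```python
# Sketch: a description that routes agents among the sibling egov_laws_* tools.
description = (
    "Full structured law text (all chapters / sections / articles, JSON). "
    "Use egov_laws_article when you need a single article, "
    "egov_laws_search to find a law_id by keyword, "
    "and egov_laws_meta when only metadata is needed."
)
```

One routing sentence is usually enough; longer decision trees cost tokens without improving selection.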

egov_laws_meta (grade B)

Metadata for a Japanese law (title, kana, category, promulgation/enforcement, revision info; no full text).

Parameters (JSON Schema)

Name   | Required | Description                    | Default
law_id | Yes      | e.g. 129AC0000000089 for 民法  |
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what data is returned but doesn't describe the tool's behavior—such as whether it's a read-only operation, error handling, response format, or any limitations. For a tool with zero annotation coverage, this leaves significant gaps in understanding how it operates.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, using a single sentence that efficiently communicates the tool's scope and limitations. Every word earns its place, with no redundant information, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and high schema coverage, the description is adequate for basic understanding. However, with no output schema and no annotations, it lacks details on return values and behavioral traits, leaving room for improvement in completeness for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'law_id' parameter well-documented in the schema. The description adds no additional parameter semantics beyond what the schema provides, such as examples of valid IDs or constraints. With high schema coverage, the baseline score of 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving metadata for a Japanese law, specifying what fields are included (title, kana, category, promulgation/enforcement, revision info) and explicitly excluding full text. It distinguishes from sibling tools like 'egov_laws_article' and 'egov_laws_full' by focusing on metadata only. However, it doesn't explicitly name these siblings for differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'no full text' but doesn't specify when to choose this over 'egov_laws_full' or 'egov_laws_article', nor does it indicate prerequisites or contextual constraints for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

weather_current (grade A)

Current weather (temp, humidity, wind, precipitation, condition) by city name or coords. Japan / global. Data: Open-Meteo (CC BY 4.0). Gateway proxies the free preview; for production pay-per-call ($0.001 via x402 on Base), see _paid_call.

Parameters (JSON Schema)

Name | Required | Description           | Default
lat  | No       |                       |
lon  | No       |                       |
city | No       | City name, e.g. Tokyo |
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing data source (Open-Meteo), licensing (CC BY 4.0), gateway limitations (free preview), and production cost details. However, it doesn't mention rate limits, error conditions, or response format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero waste: the first sentence states purpose and parameters, the second covers geographic scope, the third discloses data source and licensing, and the fourth explains gateway limitations and alternatives. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with no annotations and no output schema, the description provides good context about what data is returned (temp, humidity, wind, precipitation, condition), data source, licensing, and usage limitations. The main gap is lack of information about response format or error handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 33% (only 'city' parameter has a description), but the description adds value by explaining the coordinate parameters are for latitude/longitude and that either city name OR coordinates can be used. This partially compensates for the low schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
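The city-vs-coordinates alternative could be encoded in the schema itself rather than left to prose. A hypothetical sketch using JSON Schema's oneOf; the server's actual schema simply marks all three parameters optional with no such constraint:

```python
# Sketch: expressing "city name OR lat+lon pair" as a JSON Schema constraint.
# Property descriptions beyond 'city' are illustrative additions.
input_schema = {
    "type": "object",
    "properties": {
        "lat": {"type": "number", "description": "Latitude in decimal degrees."},
        "lon": {"type": "number", "description": "Longitude in decimal degrees."},
        "city": {"type": "string", "description": "City name, e.g. Tokyo."},
    },
    "oneOf": [
        {"required": ["city"]},        # locate by name...
        {"required": ["lat", "lon"]},  # ...or by coordinates
    ],
}
```

If supplying both city and coordinates should be allowed, anyOf is the looser choice, since oneOf rejects a request that satisfies both branches.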

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Current weather') and resource (temperature, humidity, wind, precipitation, condition) with geographic scope (Japan/global). It distinguishes from the sibling 'weather_forecast' by specifying current conditions rather than predictions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use it (for current weather data) and mentions an alternative ('_paid_call' for production use), but doesn't explicitly contrast with the sibling 'weather_forecast' tool or explain when to choose city vs coordinates.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

weather_forecast (grade B)

Daily weather forecast (1-7 days) by city or coords. Data: Open-Meteo. Preview free; production $0.001/call.

Parameters (JSON Schema)

Name | Required | Description | Default
lat  | No       |             |
lon  | No       |             |
city | No       |             |
days | No       |             |
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the data source (Open-Meteo) and pricing structure ('Preview free; production $0.001/call'), which adds useful context about external dependencies and costs. However, it doesn't describe error conditions, rate limits, authentication requirements, or what the response format looks like. For a tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: just two sentences that pack essential information: what the tool does, input options, data source, and pricing. Every word earns its place with no redundancy or fluff. The information is front-loaded with the core functionality stated first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 0% schema coverage, no annotations, and no output schema, the description is incomplete. While it efficiently states the core purpose and adds useful context about data source and pricing, it doesn't explain parameter usage, relationships between parameters, error handling, response format, or authentication needs. For a tool with this complexity and lack of structured documentation, the description should do more to compensate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for 4 parameters, the description must compensate but provides minimal parameter information. It mentions 'by city or coords' which hints at 'city' and possibly 'lat/lon' parameters, and '1-7 days' hints at the 'days' parameter range. However, it doesn't explain parameter relationships (e.g., city vs coordinates), formats, or constraints beyond what's in the schema. The description adds some meaning but doesn't adequately compensate for the complete lack of schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
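The "1-7 days" constraint from the description could move into the schema, together with short descriptions for each parameter. A hypothetical revision; all description strings are illustrative, and no default is shown because the server documents none:

```python
# Sketch: documenting the days range and coordinate parameters in-schema,
# lifting description coverage from 0% to 100%.
input_schema = {
    "type": "object",
    "properties": {
        "lat": {"type": "number", "description": "Latitude in decimal degrees."},
        "lon": {"type": "number", "description": "Longitude in decimal degrees."},
        "city": {"type": "string", "description": "City name, e.g. Tokyo."},
        "days": {
            "type": "integer",
            "minimum": 1,
            "maximum": 7,
            "description": "Forecast horizon in days (1-7).",
        },
    },
}
```

With the range expressed as minimum/maximum, a validating client can reject an out-of-range days value before the call is ever made.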

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Daily weather forecast (1-7 days) by city or coords', specifying the verb (forecast), resource (weather), and scope (daily, 1-7 days). It distinguishes from sibling 'weather_current' by indicating forecast vs current conditions. However, it doesn't explicitly contrast with the sibling tool, keeping it at 4 rather than 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'by city or coords' and mentions data source (Open-Meteo) and pricing details, which suggests when this tool might be appropriate. However, it doesn't provide explicit guidance on when to use this vs 'weather_current' or other alternatives, nor does it mention prerequisites or exclusions. The pricing information hints at cost considerations but isn't framed as usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
