Server Details

Read-only MCP server for searching Japan government procurement bid information from the KKJ portal.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: sugukurukabe/koko-call-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade A)

Average 4/5 across 4 of 4 tools scored. Lowest: 3.2/5.

Server Coherence (Grade A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get single detail, list recent, search by multiple criteria, and summarize by organization. No overlap or ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with underscores (e.g., get_bid_detail, list_recent_bids, search_bids, summarize_bids_by_org), making them predictable.

Tool Count: 5/5

Four tools is well-scoped for a specialized read-only bidding information server, covering essential retrieval and analysis operations without being too few or excessive.

Completeness: 5/5

The tool surface covers the core operations for the domain: single detail retrieval, recent listing, flexible searching, and organizational summary. No obvious gaps for its stated purpose.

Available Tools

14 tools
analyze_past_awards: Past Announcement and Competitor Radar (Grade A)
Read-only, Idempotent

Aggregates past bid announcements and returns contracting-agency frequency, category bias, and monthly trends, estimating competitive patterns from announcement history.

Parameters (JSON Schema)
limit (optional): Upper limit on the number of announcements fetched from the KKJ API.
query (optional): Free-text keywords, e.g. システム, 保守, クラウド.
category (optional)
prefecture (optional)
window_days (optional): How many days of past announcements to aggregate. Default: 365.
procedure_type (optional)
organization_name (optional): Specify to filter by contracting-agency name.

Output Schema (JSON Schema)
query (required)
caveats (required)
insights (required)
totalHits (required)
windowDays (required)
attribution (required)
monthlyVolume (required)
returnedCount (required)
topOrganizations (required)
categoryBreakdown (required)
procedureBreakdown (required)
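Concretely, a client reaches this tool through the standard MCP tools/call JSON-RPC method. A minimal Python sketch of the request payload follows; the argument values are illustrative, not taken from this listing, and the request id is arbitrary:

```python
import json

# Sketch of a JSON-RPC 2.0 "tools/call" request for analyze_past_awards,
# as an MCP client would send it over Streamable HTTP.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_past_awards",
        "arguments": {
            "query": "クラウド",   # free-text keyword
            "window_days": 365,    # aggregate the past year (the default)
            "limit": 100,          # cap on announcements fetched from KKJ
        },
    },
}
body = json.dumps(request, ensure_ascii=False)
print(body)
```

In practice an MCP client library builds this envelope for you; the sketch only shows where the parameters from the table above end up.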
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnly, openWorld, idempotent, non-destructive. Description adds critical context: does not handle award data, focuses on announcement history, and outlines return structure (frequency, bias, trends, candidates, cautions). Complements annotations without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence front-loads the action and lists key outputs. No extraneous words; every clause adds value. Efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 7 parameters (some enums) and an output schema, the description provides a good high-level overview of functionality and outputs. Could be improved by linking parameters to behavior, but overall adequate for an MVP tool given annotations and complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 57%, but description does not elaborate on parameters. It mentions 'condition matching' generically but does not explain how limit, query, window_days, etc., affect behavior. Minimal added meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb (集計する), resource (過去公告), and specific outputs (頻度、偏り、トレンド、注目候補). Distinguishes from siblings by explicitly excluding award data and focusing on announcement history for competition/ordering pattern estimation. Also notes MVP status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for analyzing past announcements to infer patterns, but does not specify when to use this tool versus alternatives like search_bids or get_bid_detail. No explicit when-not or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

assess_bid_qualification: Bid Qualification MVP Check (Grade A)
Read-only, Idempotent

Checks one bid against your company's regions, categories, and certifications and returns an MVP-level judgment of whether you can participate.

Parameters (JSON Schema)
bid_key (required): The Key field returned by search_bids, rank_bids, or list_recent_bids.
target_uris (optional): URLs to extract from. If omitted, up to 3 URLs are taken from the official announcement page and attachment URLs in the search result.
certifications (optional): Your company's certifications, grades, and business categories, e.g. A, 役務の提供等, 情報処理.
fetch_documents (optional): If true, PDF/HTML extraction results are reflected in the qualification judgment.
service_keywords (optional): Terms related to your services, e.g. システム, 保守, クラウド.
qualified_categories (optional): Categories your company can handle, e.g. 役務, 物品.
qualified_prefectures (optional): Prefectures your company can serve, e.g. 鹿児島県, 宮崎県.

Output Schema (JSON Schema)
bid (required)
gaps (required)
status (required)
matches (required)
profile (required)
unknowns (required)
confidence (required)
attribution (required)
nextActions (required)
requirementsUsed (optional)
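Because bid_key must come from a prior search_bids, rank_bids, or list_recent_bids call, a typical flow chains the two tools. A sketch, with a hypothetical result shape and Key value (the real Key format is server-defined):

```python
# Hypothetical search_bids result entry; field names are illustrative.
search_result = {"bids": [{"Key": "kkj-2024-000123", "ProjectName": "クラウド保守"}]}

# Feed the returned Key into assess_bid_qualification's arguments,
# together with an (illustrative) company profile.
bid_key = search_result["bids"][0]["Key"]
arguments = {
    "bid_key": bid_key,
    "qualified_prefectures": ["鹿児島県", "宮崎県"],
    "qualified_categories": ["役務"],
    "certifications": ["A", "情報処理"],
    "fetch_documents": True,  # fold PDF/HTML extraction into the judgment
}
print(arguments["bid_key"])
```

Passing a Key that did not come from one of the three listed tools is unlikely to resolve, so the chaining step is not optional.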
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly, openWorld, idempotent, and non-destructive hints. The description adds behavioral context (MVP judgment, internal use) beyond annotations, aiding the agent's understanding without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words: first defines function, second adds context. Perfectly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, output schema exists), the description covers the core purpose and context. It might benefit from clarifying the output interpretation, but the presence of an output schema reduces the need.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description maps parameter names to company attributes but adds no new semantic detail beyond what the input schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies a clear action: assess bid qualification by matching company's areas, categories, qualifications, and keywords. It distinguishes from siblings like 'explain_bid_fit' by focusing on MVP judgment and internal confirmation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It states the tool is for internal confirmation before external DB linkage and not a final decision, providing clear usage context. However, it does not explicitly mention when not to use it or suggest specific alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_bid_calendar: Bid Deadline Calendar, ICS (Grade A)
Read-only, Idempotent

Returns a bid's submission deadline, opening date, and delivery deadline in ICS format that can be imported into Google Calendar or Outlook.

Parameters (JSON Schema)
bid_key (required): The Key field returned by search_bids, rank_bids, or list_recent_bids.
target_uris (optional): URLs to extract from. If omitted, up to 3 URLs are taken from the official announcement page and attachment URLs in the search result.
fetch_documents (optional): If true, briefing-session dates and submission deadlines found in PDF/HTML extraction results are added to the calendar.

Output Schema (JSON Schema)
ics (required)
events (required)
format (required)
filename (required)
eventCount (required)
attribution (required)
missingDates (required)
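The output schema promises an ics string plus a suggested filename; writing that text to disk yields a file Google Calendar or Outlook can import. A sketch with a minimal, illustrative VCALENDAR body (not a real tool response):

```python
import pathlib
import tempfile

# Hypothetical tool result shaped like the output schema above.
result = {
    "ics": "BEGIN:VCALENDAR\r\nVERSION:2.0\r\nEND:VCALENDAR\r\n",
    "filename": "bid-deadlines.ics",
}

# Save the ICS text under the server-suggested filename so a calendar
# app can import it. A temp directory stands in for a real save location.
out_dir = pathlib.Path(tempfile.mkdtemp())
path = out_dir / result["filename"]
path.write_text(result["ics"], encoding="utf-8")
print(path.read_text(encoding="utf-8").splitlines()[0])  # BEGIN:VCALENDAR
```

Dates the server could not find are reported in missingDates rather than written into the file, so check that field before trusting the calendar as complete.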
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already confirm read-only, idempotent, non-destructive. The description adds valuable behavioral context: it returns ICS format, includes specific date types, and explicitly states that missing question deadlines are placed in 'missingDates' rather than fabricated. This goes beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short, front-loaded sentences with no wasted words. Every sentence provides essential information about the tool's function and behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description need not explain return values. It covers the ICS format, dates included, and handling of missing data. Annotations cover safety. Slight lack of explicit prerequisites or side effects, but adequate for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The sole parameter (bid_key) has schema description specifying it comes from certain tools. With 100% schema coverage, the description adds no further parameter details. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool returns an ICS calendar file with specific bid deadlines (submission, internal confirmation, opening, delivery) given a bid key. It also explains handling of missing question deadlines. This distinguishes it from sibling tools like get_bid_detail or search_bids.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is used to get a calendar export for a bid, but does not explicitly state when to use it over alternative tools (e.g., get_bid_detail for other info). No exclusion criteria or when-not-to-use guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_bid_review_packet: Internal Bid Review Packet (Grade A)
Read-only, Idempotent

Returns one bid's decision summary, rationale, risks, requirements, deadlines, and next actions as an internal Markdown memo.

Parameters (JSON Schema)
bid_key (required): The Key field returned by search_bids, rank_bids, or list_recent_bids.
target_uris (optional): URLs to extract from. If omitted, up to 3 URLs are taken from the official announcement page and attachment URLs in the search result.
avoid_keywords (optional)
due_within_days (optional)
fetch_documents (optional): If true, PDF/HTML extraction results are reflected in the internal review memo.
preferred_keywords (optional)

Output Schema (JSON Schema)
bid (required)
title (required)
calendar (required)
markdown (required)
rankedBid (required)
attribution (required)
requirements (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds that the output is Markdown and intended for pasting into Google Docs/Notion. This context is valuable beyond the annotations, though it does not detail data source or computational complexity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no redundancy. The description is front-loaded with the primary purpose and output format, then adds usage context. Every word is meaningful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with only 25% schema coverage, the description leaves three parameters unexplained. The output schema exists, so return values are covered, but the tool's behavior regarding how keywords and due_within_days affect the memo is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 25%; only 'bid_key' has a description. The description does not explain 'avoid_keywords', 'due_within_days', or 'preferred_keywords'. With low coverage, the description should compensate but fails to clarify the role of these parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool takes a single bid key and returns a Markdown internal review memo with specific sections (decision summary, reason, risk, etc.). It clearly distinguishes from sibling tools like search or analyze by specifying the output format and intended use for internal documentation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when a comprehensive internal review memo is needed, but it does not explicitly state when to use this tool over alternatives like 'assess_bid_qualification' or 'explain_bid_fit'. No exclusion criteria or alternative suggestions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

draft_bid_questions: Bid Question Sheet Draft (Grade A)
Read-only, Idempotent

Returns draft clarification questions to the contracting agency about one bid, in Markdown. Verify against the official documents before submitting.

Parameters (JSON Schema)
bid_key (required): The Key field returned by search_bids, rank_bids, or list_recent_bids.
target_uris (optional): URLs to extract from. If omitted, up to 3 URLs are taken from the official announcement page and attachment URLs in the search result.
fetch_documents (optional): If true, PDF/HTML extraction results are used to narrow down the draft questions.

Output Schema (JSON Schema)
bid (required)
title (required)
markdown (required)
questions (required)
attribution (required)
reviewNotes (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the description does not need to restate that. It adds value by revealing the output format (Markdown) and the draft nature with a verification step. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence but conveys both the core functionality and the important verification context efficiently. It is slightly verbose but not wasteful.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, output schema present), the description adequately covers the tool's behavior, output format, and usage requirement (draft verification). No gaps are evident.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the only parameter, clearly stating it comes from specific previous tool outputs. The description does not add extra semantic meaning beyond the schema, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool drafts questions for a single bid key and returns them in Markdown format, specifying both the resource (bid questions) and action (draft), distinguishing it from siblings like 'assess_bid_qualification' or 'create_bid_review_packet'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly notes that the output is a draft requiring verification against official announcements, specifications, and formats before submission, providing clear usage context. It does not mention alternatives or when not to use, but the context is clear enough.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

explain_bid_fit: Bid Pursuit Decision Explanation (Grade A)
Read-only, Idempotent

Explains whether one bid should be pursued, needs confirmation, or should be passed on, with rationale, risks, and next actions.

Parameters (JSON Schema)
bid_key (required): The Key field returned by search_bids, rank_bids, or list_recent_bids.
target_uris (optional): URLs to extract from. If omitted, up to 3 URLs are taken from the official announcement page and attachment URLs in the search result.
avoid_keywords (optional): Terms to avoid, e.g. 工事, 常駐, 夜間.
due_within_days (optional)
fetch_documents (optional): If true, PDF/HTML extraction results are reflected in the pursuit explanation.
preferred_keywords (optional): Terms to prefer, e.g. ソフトウェア, 保守, クラウド.

Output Schema (JSON Schema)
rankedBid (required)
fitSummary (required)
attribution (required)
scoringPolicy (required)
confirmationChecklist (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, so the description adds value by explaining the output nature (reasons, risks, next actions) and usage context (internal memo). It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that conveys the core purpose and usage context without extraneous detail. Every phrase adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters and an output schema, the description covers purpose, output, and usage context. It could note that the bid_key must come from previous tools, but overall it is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 75% with descriptions for bid_key, avoid_keywords, and preferred_keywords. The description adds no additional parameter information, missing an opportunity to explain due_within_days. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: explaining whether to pursue, needs confirmation, or pass for a single bid key, with reasons, risks, and next actions. It specifies it's an internal memo, not a final decision, which distinguishes it from sibling tools like assess_bid_qualification or rank_bids.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied: used after obtaining a bid key from search/list tools, and as an internal memo before official confirmation. However, there is no explicit guidance on when not to use it or alternative tools, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

export_bid_shortlist: Bid Shortlist CSV Export (Grade A)
Read-only, Idempotent

Searches and ranks bids, then returns a CSV for Google Sheets or Excel with scores, judgments, risks, and next actions.

Parameters (JSON Schema)
limit (optional)
query (optional): Free-text keywords; multiple keywords are AND-combined.
category (optional)
due_after (optional): Bid submission deadline on or after YYYY-MM-DD.
due_before (optional): Bid submission deadline on or before YYYY-MM-DD.
prefecture (optional)
issued_after (optional): Announcement date on or after YYYY-MM-DD.
project_name (optional): Specify to filter by project title.
certification (optional)
issued_before (optional): Announcement date on or before YYYY-MM-DD.
opening_after (optional): Bid opening date on or after YYYY-MM-DD.
avoid_keywords (optional): Terms to avoid, e.g. 工事, 常駐, 夜間.
opening_before (optional): Bid opening date on or before YYYY-MM-DD.
procedure_type (optional)
due_within_days (optional)
fetch_documents (optional): If true, PDF/HTML extraction results for top-ranked bids are added to the CSV as eligibility and deadline columns.
shortlist_limit (optional)
period_end_after (optional): Delivery deadline on or after YYYY-MM-DD.
organization_name (optional)
period_end_before (optional): Delivery deadline on or before YYYY-MM-DD.
preferred_keywords (optional): Terms to prefer, e.g. ソフトウェア, 保守, クラウド.

Output Schema (JSON Schema)
csv (required)
format (required)
columns (required)
filename (required)
attribution (required)
rankedCount (required)
scoringPolicy (required)
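The output schema delivers the shortlist as a csv text field alongside a columns list, so Python's csv module can parse it directly. The column names and row below are illustrative, not the server's actual layout:

```python
import csv
import io

# Hypothetical tool result shaped like the output schema above.
result = {
    "columns": ["Key", "Score", "Judgment"],
    "csv": "Key,Score,Judgment\r\nkkj-2024-000123,82,追跡\r\n",
}

# Parse the CSV text in memory; DictReader keys rows by the header line.
rows = list(csv.DictReader(io.StringIO(result["csv"])))
top = rows[0]
print(top["Key"], top["Score"])
```

From here the same text can be written to a .csv file and opened in Google Sheets or Excel unchanged.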
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds value by specifying the output format (CSV) and content (scores, judgment, etc.), which are beyond the annotations. However, it does not disclose any additional behavioral traits such as rate limits or authentication needs, but given the strong annotation coverage, the description is sufficiently informative.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, consisting of two short sentences that convey the core functionality without unnecessary detail. It is well-structured and front-loaded with the key action (search and rank) and output (CSV).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (20 parameters, output schema exists), the description provides a high-level overview that covers the main purpose and output. It does not delve into filtering options or output schema details, but that is acceptable because the output schema exists. The description is complete enough for an agent to understand the tool's role and when to use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 20 parameters and 60% schema description coverage, the description itself provides no additional semantic information about the parameters. It does not explain which parameters are key or how they affect the shortlisting, leaving a gap that the schema only partially fills. The description should at least mention that 'query' and 'limit' are primary for searching.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to search and rank government procurement bids and return a CSV for internal consideration, including scores, judgment, reasons, risks, and next actions. This differentiates it from sibling tools like search_bids and rank_bids which return raw data or rankings without CSV export.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly suggests usage for exporting a shortlist CSV, but it does not explicitly state when to use this tool versus alternatives like search_bids or rank_bids. No usage context, prerequisites, or exclusions are provided, leaving the agent to infer appropriate usage cues.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

extract_bid_requirements: Bid Requirement Extraction MVP (Grade A)
Read-only, Idempotent

Structures one bid's participation conditions, deadlines, and the PDFs to review. PDFs are not stored.

Parameters (JSON Schema)
bid_key (required): The Key field returned by search_bids, rank_bids, or list_recent_bids.
target_uris (optional): URLs to extract from. If omitted, up to 3 URLs are taken from the official announcement page and attachment URLs in the search result.
fetch_documents (optional): If true, the official announcement page or attachments are fetched temporarily to attempt requirement extraction.

Output Schema (JSON Schema)
bid (required)
attribution (required)
safetyNotes (required)
extractionPlan (required)
documentTargets (required)
knownRequirements (required)
rawExtractionText (optional)
extractionWarnings (required)
missingRequirements (required)
extractedRequirements (optional)
extractedFromDocuments (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly, idempotent, non-destructive. Description adds that PDF body is not saved, reinforcing safety and non-persistence. This goes beyond annotations to provide user-facing behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with clear structure: action, source, output, and role. No redundant words, effectively front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description adequately explains scope (single bid), processed data (metadata and attachments), output types (conditions, deadlines, PDFs), and usage context (safe requirement organization). No gaps for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a description for the single required parameter (bid_key). Description restates that it works for one bid key but does not add new semantics beyond the schema's description of where the key comes from.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb ('抽出', extract) and resource ('入札要件', bid requirements) for a single bid key. It lists specific outputs (participation conditions, deadlines, PDFs to check) and distinguishes from siblings by focusing on requirement extraction before AI integration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Indicates usage for a single bid key and positions the tool as a safe pre-processing step before Gemini/Document AI. Provides context on when to use (before AI integration) but does not explicitly contrast with sibling tools or state when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bid_detail — 官公需入札詳細 (Public Procurement Bid Detail) · Grade A
Read-only · Idempotent

入札Keyから1件の詳細を取得する。添付資料は保存せず公式サイトのURIのみ返す。Fetch full details for one bid by key without storing attachments. Ambil detail lengkap satu tender berdasarkan kunci tanpa menyimpan lampiran.

Parameters (JSON Schema)
Name | Required | Description
bid_key | Yes | The Key field returned by search_bids or list_recent_bids

Output Schema

Name | Required
bids | Yes
query | Yes
searchHits | Yes
attribution | Yes
returnedCount | Yes
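To make the call shape concrete, here is a minimal sketch of a JSON-RPC `tools/call` request for get_bid_detail. The framing follows MCP's standard `tools/call` method; the `bid_key` value is an invented placeholder — a real key would come from the Key field of a search_bids or list_recent_bids result.

```python
import json

# Hypothetical MCP tools/call request for get_bid_detail.
# "bid-key-placeholder" is an invented example value; real keys come
# from the Key field of search_bids or list_recent_bids results.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_bid_detail",
        "arguments": {"bid_key": "bid-key-placeholder"},
    },
}

print(json.dumps(request, ensure_ascii=False, indent=2))
```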
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already convey read-only, idempotent, non-destructive behavior. The description adds valuable insight: it does not save attachments and only returns the official site URI, which goes beyond annotations and clarifies side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences, front-loading the core action and a key behavioral constraint. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, output schema exists), the description fully covers what the agent needs: purpose, key source for input, and behavioral nuance (no attachment saving). It is sufficient without further elaboration.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The sole parameter bid_key is well-documented in the schema (source from search_bids/list_recent_bids). The description repeats this but does not add further semantic details beyond what the schema provides, so baseline score is appropriate.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves one bid detail using a bid key, specifying it does not save attachments but returns only the URI. This distinguishes it from sibling tools like list_recent_bids or search_bids by focusing on singular detail retrieval.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates the bid_key comes from search_bids or list_recent_bids, providing context for when to use the tool. However, it lacks explicit guidance on when not to use it or alternatives beyond implied differentiation.

list_recent_bids — 直近の官公需入札一覧 (Recent Public Procurement Bids) · Grade B
Read-only · Idempotent

過去1〜30日間に公告された新着官公需入札を一覧する。毎朝の営業チェックに使う。List recently published bid notices from the past 1–30 days for daily morning sales checks. Daftar pengumuman tender baru dalam 1–30 hari terakhir untuk cek pagi harian.

Parameters (JSON Schema)
Name | Required
days | No
limit | No
category | No
prefecture | No

Output Schema

Name | Required
bids | Yes
query | Yes
searchHits | Yes
attribution | Yes
returnedCount | Yes
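As a sketch, a morning-check request for list_recent_bids might look like the following. All four parameters are optional; the specific values (7 days, limit 10, a Tokyo prefecture label) are illustrative assumptions, not documented defaults.

```python
import json

# Hypothetical morning-check request: bids announced in the last 7 days.
# days/limit/category/prefecture are all optional; values are examples only.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "list_recent_bids",
        "arguments": {
            "days": 7,               # within the documented 1-30 day window
            "limit": 10,
            "prefecture": "東京都",  # assumed label format for prefecture
        },
    },
}

print(json.dumps(request, ensure_ascii=False, indent=2))
```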
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, and non-destructive behavior. The description adds that it returns newly posted bids within a date range, but does not disclose other behaviors like output ordering or pagination. Given annotations, this is adequate but not enriched.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, concise and front-loaded with the main action. It wastes no words, though it could benefit from a slightly more structured format (e.g., listing parameters).

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no required fields, and an output schema, the description provides a high-level overview but lacks detail on parameter semantics and result characteristics. It is adequate for a simple tool but not comprehensive.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate. It mentions the 'days' parameter implicitly ('過去1日から30日間') and 'prefecture' via '地域別', but does not explain 'limit' or 'category' parameters. Two of four parameters lack any explanation.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists recent bids announced within the past 1-30 days, and provides usage context (daily morning checks, regional checks). It distinguishes itself from siblings by focusing on recent postings, though it could be more explicit about how it differs from 'search_bids'.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description suggests specific use cases (daily checks and regional new posting checks), implying when to use this tool. However, it does not explicitly compare to sibling tools or state when not to use it, leaving some ambiguity.

rank_bids — 追うべき入札ランキング (Ranking of Bids Worth Pursuing) · Grade A
Read-only · Idempotent

官公需入札を検索し、追うべき順にスコアリングする。候補整理用であり参加可否の最終判断ではない。Search and score bids by follow-up priority using AI Bid Radar. Cari dan peringkat tender berdasarkan prioritas tindak lanjut dengan AI Bid Radar.

Parameters (JSON Schema)
Name | Required | Description
limit | No |
query | No | Free-text keywords; multiple keywords are AND-combined
category | No |
due_after | No | Bid submission deadline on or after YYYY-MM-DD
due_before | No | Bid submission deadline on or before YYYY-MM-DD
prefecture | No |
issued_after | No | Announcement date on or after YYYY-MM-DD
project_name | No | Specify to filter by project title
certification | No |
issued_before | No | Announcement date on or before YYYY-MM-DD
opening_after | No | Bid opening date on or after YYYY-MM-DD
avoid_keywords | No | Terms to avoid, e.g. 工事 (construction), 常駐 (on-site), 夜間 (night work)
opening_before | No | Bid opening date on or before YYYY-MM-DD
procedure_type | No |
due_within_days | No | Prioritizes bids whose submission deadline falls within this many days
shortlist_limit | No | Maximum number of ranked results to return
period_end_after | No | Delivery deadline on or after YYYY-MM-DD
organization_name | No |
period_end_before | No | Delivery deadline on or before YYYY-MM-DD
preferred_keywords | No | Terms to prefer, e.g. ソフトウェア (software), 保守 (maintenance), クラウド (cloud)

Output Schema

Name | Required
query | Yes
rankedBids | Yes
searchHits | Yes
attribution | Yes
rankedCount | Yes
returnedCount | Yes
scoringPolicy | Yes
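A hedged sketch of a rank_bids request follows, using the preference and deadline parameters from the table above. Whether the keyword parameters accept a single string or a list is not documented, so a space-separated string is assumed here; all values are illustrative.

```python
import json

# Hypothetical rank_bids request: prefer cloud/maintenance work, avoid
# construction and on-site terms, and prioritize near-term deadlines.
# Space-separated keyword strings are an assumption, not documented syntax.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "rank_bids",
        "arguments": {
            "preferred_keywords": "クラウド 保守",  # terms to prefer
            "avoid_keywords": "工事 常駐",          # terms to avoid
            "due_within_days": 14,                  # prioritize deadlines within 2 weeks
            "shortlist_limit": 5,                   # cap the ranked output
        },
    },
}

print(json.dumps(request, ensure_ascii=False, indent=2))
```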
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, idempotent, etc. Description adds modest context about its role in the pipeline but no additional behavioral traits like rate limits or side effects. No contradiction.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no filler, front-loaded with key purpose and boundary conditions.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 20 parameters, high schema coverage, and explicit output schema, description adequately completes the picture by clarifying the tool's place in the workflow.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 70%, so baseline is 3. Description does not elaborate on any parameters beyond what schema already provides.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Describes searching and scoring bids with a specific verb (スコアリングする) and resource (官公需入札), and distinguishes from sibling tools like search_bids by adding ranking.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States it is for candidate organization only, not final judgment, providing clear context. Does not explicitly mention alternatives but sets expectations.

search_bids — 官公需入札検索 (Public Procurement Bid Search) · Grade A
Read-only · Idempotent

日本全国の官公需入札情報を検索する。全文検索は query、件名は project_name、発注機関は organization_name を使う。Search Japanese public procurement bids by keyword, project name, or organization. Cari tender pengadaan pemerintah Jepang berdasarkan kata kunci, nama proyek, atau instansi.

Parameters (JSON Schema)
Name | Required | Description
limit | No |
query | No | Free-text keywords; multiple keywords are AND-combined
category | No |
due_after | No | Bid submission deadline on or after YYYY-MM-DD
due_before | No | Bid submission deadline on or before YYYY-MM-DD
prefecture | No |
issued_after | No | Announcement date on or after YYYY-MM-DD
project_name | No | Specify to filter by project title
certification | No |
issued_before | No | Announcement date on or before YYYY-MM-DD
opening_after | No | Bid opening date on or after YYYY-MM-DD
opening_before | No | Bid opening date on or before YYYY-MM-DD
procedure_type | No |
period_end_after | No | Delivery deadline on or after YYYY-MM-DD
organization_name | No |
period_end_before | No | Delivery deadline on or before YYYY-MM-DD

Output Schema

Name | Required
bids | Yes
query | Yes
searchHits | Yes
attribution | Yes
returnedCount | Yes
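A sketch of a search_bids request combining full-text search with date-range filters follows. Per the parameter docs, keywords in `query` are AND-combined; the dates and limit are example values, not defaults.

```python
import json

# Hypothetical search_bids request: AND-combined keyword search plus
# announcement-date and submission-deadline filters. Values are examples.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "search_bids",
        "arguments": {
            "query": "ソフトウェア 保守",  # AND-combined keywords
            "issued_after": "2024-04-01",  # announced on or after this date
            "due_before": "2024-06-30",    # submission deadline on or before
            "limit": 20,
        },
    },
}

print(json.dumps(request, ensure_ascii=False, indent=2))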
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only, open-world, idempotent, non-destructive nature. Description adds behavioral details like AND combination for multiple keywords and date range usability, enhancing transparency.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states purpose, second details search modes. Front-loaded with essential info, no redundancy.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 16 parameters and presence of output schema, description covers key search modes and date filters. Could elaborate on certification, procedure_type, but overall adequate.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 63%, but description adds significant meaning by explaining that query is full-text, project_name is title-only, and organization_name is for ordering organization. Also clarifies date filter purposes.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it searches Japanese public procurement bids, specifying different search modes (full-text, title, organization) and date filters. Distinguishes itself from sibling tools like get_bid_detail and list_recent_bids by being the primary search function.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use each parameter (query for full-text, project_name for title, organization_name for ordering org). Does not explicitly exclude alternatives but gives clear context for each field.

search_bids_app — 官公需入札検索テーブル (Public Procurement Bid Search Table) · Grade A
Read-only · Idempotent

日本全国の官公需入札情報を検索し、MCP Apps対応クライアントでは検索結果を表で表示する。非対応クライアントでも通常のテキスト要約とstructuredContentを返す。(Searches public procurement bids across Japan; MCP Apps-capable clients display results as a table, while unsupported clients still receive a plain text summary and structuredContent.)

Parameters (JSON Schema)
Name | Required | Description
limit | No |
query | No | Free-text keywords; multiple keywords are AND-combined
category | No |
due_after | No | Bid submission deadline on or after YYYY-MM-DD
due_before | No | Bid submission deadline on or before YYYY-MM-DD
prefecture | No |
issued_after | No | Announcement date on or after YYYY-MM-DD
project_name | No | Specify to filter by project title
certification | No |
issued_before | No | Announcement date on or before YYYY-MM-DD
opening_after | No | Bid opening date on or after YYYY-MM-DD
opening_before | No | Bid opening date on or before YYYY-MM-DD
procedure_type | No |
period_end_after | No | Delivery deadline on or after YYYY-MM-DD
organization_name | No |
period_end_before | No | Delivery deadline on or before YYYY-MM-DD

Output Schema

Name | Required
bids | Yes
query | Yes
searchHits | Yes
attribution | Yes
returnedCount | Yes
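Since clients without MCP Apps support fall back to a text summary plus structuredContent, a client-side sketch of that fallback might look like this. The top-level field names mirror the output schema above; the nested bid field `Key` is an assumption based on the bid_key documentation, and the sample values are invented.

```python
# Hypothetical search_bids_app result as a non-Apps client would see it:
# a human-readable text part plus machine-readable structuredContent.
result = {
    "content": [{"type": "text", "text": "1件ヒットしました。"}],
    "structuredContent": {
        "bids": [{"Key": "example-key"}],  # "Key" per the bid_key docs; other fields omitted
        "query": {"query": "クラウド"},
        "searchHits": 1,
        "returnedCount": 1,
        "attribution": "KKJ portal",
    },
}

# Prefer structuredContent when present; fall back to the text summary.
structured = result.get("structuredContent")
if structured:
    keys = [bid["Key"] for bid in structured["bids"]]
else:
    keys = []

print(keys)
```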
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, etc. The description adds behavioral details about output format (table for compatible clients, text+structuredContent otherwise), which is not covered by annotations.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences, front-loading the main purpose and then adding output details. No unnecessary words.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 16 parameters and an output schema exists, the description is minimal. It covers output format but omits guidance on parameter usage, filtering options, or expected results. Adequate but incomplete.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 63% (moderate), but the description does not discuss any parameters or add meaning beyond the schema. Many parameters (37%) lack descriptions in both schema and description, so the description fails to compensate.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (search) and resource (public procurement bids in Japan). It also specifies output format behavior for different clients. However, it does not explicitly differentiate from the sibling tool 'search_bids', which likely has similar functionality.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for searching bids but provides no explicit guidance on when to use this tool versus alternatives like 'search_bids' or other sibling tools. No when-not or context rules are given.

summarize_bids_by_org — 発注機関別の入札傾向分析 (Bid Trend Analysis by Procurement Organization) · Grade A
Read-only · Idempotent

発注機関名を指定してカテゴリ別・公示種別別の入札傾向と直近案件を集計する。Summarize bid trends and recent notices for a specific procurement organization by category and procedure type. Ringkas tren tender dan pengumuman terbaru untuk instansi tertentu berdasarkan kategori.

Parameters (JSON Schema)
Name | Required | Description
limit | No |
since | No | On or after YYYY-MM-DD; defaults to the past year if omitted
organization_name | Yes | Name of the procurement organization to analyze

Output Schema

Name | Required
totalHits | Yes
categories | Yes
attribution | Yes
returnedCount | Yes
procedureTypes | Yes
recentProjects | Yes
organizationName | Yes
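A sketch of a summarize_bids_by_org request follows. Only `organization_name` is required; the organization shown is just an example agency name, and `since` is set explicitly here even though it would default to the past year if omitted.

```python
import json

# Hypothetical summarize_bids_by_org request. organization_name is the
# only required argument; the agency name below is an example value.
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "summarize_bids_by_org",
        "arguments": {
            "organization_name": "防衛省",  # example organization
            "since": "2024-01-01",          # analyze announcements from this date on
        },
    },
}

print(json.dumps(request, ensure_ascii=False, indent=2))
```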
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint=false, covering safety. The description adds behavioral context about aggregating by category and announcement type, which goes beyond annotations.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded with the key parameter 'organization_name'. However, it lacks structured formatting like bullet points or explicit parameter mentions.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema and rich annotations, the description provides adequate context for the tool's purpose (aggregate trends and recent cases). However, it omits details about the 'limit' parameter and does not clarify what constitutes 'recent cases'.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 67%, with 'limit' having no description in schema or description. The description does not add meaning beyond the schema for the other parameters. The missing parameter explanation is not compensated.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb '集計する' (aggregate) and resource '発注機関別の入札傾向' (bidding trends by organization). It distinguishes this tool from siblings like 'get_bid_detail' and 'list_recent_bids' by focusing on trend analysis per organization.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for analyzing trends by organization but does not explicitly state when to use this tool over alternatives or provide exclusion criteria. Sibling tools exist but no guidance is given.
