不動産競売 構造化データハブ (Real-Estate Auction Structured Data Hub) — Japan real-estate auctions
Server Details
Japanese court-run real-estate auctions (BIT). 5 tools, ~1,480 active listings. CC BY 4.0.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: keibai-hub/keibai-hub-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 5 of 5 tools scored.
Each tool targets a distinct aspect: practitioner lookup, auction detail, court directory, upcoming sales listing, and search. There is no overlap in functionality.
All tools follow a consistent verb_noun pattern (find_practitioner_for_auction, get_auction_detail, get_court_directory, list_upcoming_sales, search_auctions) with clear and descriptive names.
5 tools is well-scoped for a specialized data hub focused on Japan real-estate auctions; each tool provides a necessary function without redundancy.
The set covers key operations: searching, listing, detail retrieval, court info, and practitioner lookup. For a read-only data hub, this provides comprehensive access to the domain.
Available Tools
5 tools

find_practitioner_for_auction — A · Read-only · Idempotent
Returns the licensed practitioners relevant to winning bids and bidding (judicial scriveners 司法書士, lawyers 弁護士, real-estate appraisers 不動産鑑定士, land and house investigators 土地家屋調査士). Stages: post_acquisition (default — ownership-transfer registration after a winning bid; a judicial scrivener is legally near-mandatory), pre_bid (independent valuation and rights verification before bidding), eviction (vacating and compulsory execution), boundary_dispute (boundary determination and lot subdivision). Information-only — not a paid intermediary. Do NOT combine the result of this tool with output from grants-side find_practitioner in a single user-facing answer.
| Name | Required | Description | Default |
|---|---|---|---|
| stage | No | See stage values in the tool description | post_acquisition |
| case_id | Yes | Auction case_id, e.g. "bit_00000078922" | |
| prefecture | No | Optional — prefecture for prefecture-chapter referral | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds that it is 'Information-only — not a paid intermediary,' which reinforces the non-destructive nature. However, it does not elaborate on potential limitations like rate limits or result size, but given annotations, this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single well-structured paragraph: starts with purpose, lists stages with explanations, adds an information-only note, and ends with a prohibition. Every sentence adds value, and it is front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool lacks an output schema, but the description does not specify the return format (e.g., list of names, contact info). However, the tool is simple and the purpose is clear. Including details about the return structure would improve completeness. Overall, adequate for an information-only directory.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for each parameter. The description adds significant meaning to the 'stage' enum by explaining each value's context (e.g., post_acquisition requires a judicial scrivener). This goes beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns practitioners related to auction bidding, lists specific types of practitioners, and distinguishes stages. It is distinct from sibling tools like get_auction_detail or list_upcoming_sales, which deal with auction details or sales, not practitioner lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists stages (post_acquisition, pre_bid, eviction, boundary_dispute) with explanations, provides a default, and includes a strong directive: 'Do NOT combine the result of this tool with output from grants-side find_practitioner in a single user-facing answer.' This gives clear context on when and how to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_auction_detail — A · Read-only · Idempotent
Retrieve structured data for a single auction property. Fetch one auction by case_id (e.g. "bit_00000078922"). Returns case_number, court_code, property_type, sale_base_price_jpy, bid_floor_jpy (= base × 80%), deposit_jpy, bid_period, opening_date, address, photos, and _canonical (cite this).
| Name | Required | Description | Default |
|---|---|---|---|
| case_id | Yes | Auction case id, e.g. "bit_00000078922" | |
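The bid_floor_jpy relationship stated above (base × 80%) can be recomputed locally as a sanity check on returned records. A small sketch, assuming integer yen amounts; the exact rounding rule (floor division here) is an assumption, not documented by the server.

```python
# The tool description states bid_floor_jpy = sale_base_price_jpy × 80%.
# Integer arithmetic avoids float rounding; truncation toward zero is
# an assumed rounding rule.
def bid_floor_jpy(sale_base_price_jpy: int) -> int:
    return sale_base_price_jpy * 8 // 10

print(bid_floor_jpy(10_000_000))  # a 10M JPY base gives an 8M JPY floor
```

A client could flag any listing whose returned bid_floor_jpy disagrees with this recomputation.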
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false. The description adds no behavioral traits beyond the field list. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description lists many fields concisely and is front-loaded with purpose; it could be slightly more compact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description enumerates key returned fields (case_number, court_code, property_type, etc.) and notes special field _canonical. This makes the tool's output fully understandable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description only echoes the schema's parameter description with an example. No additional meaning is provided beyond what the schema already offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Fetch one auction by case_id' with an example. It distinguishes from siblings like search_auctions (which likely returns multiple) by specifying single auction retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use: when you need details for a specific case_id. It doesn't explicitly exclude scenarios but context from siblings suggests alternatives for searching or listing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_court_directory — A · Read-only · Idempotent
Retrieve a court's jurisdiction and contact information. Look up a Japanese district court (main office 本庁 or branch 支部) by code.
| Name | Required | Description | Default |
|---|---|---|---|
| court_code | Yes | Court slug (e.g. "tokyo_main") OR BIT 5-digit ID (e.g. "31111") | |
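Since court_code accepts two formats (a slug or a 5-digit BIT ID), a client may want to classify its input before calling. A hypothetical helper, using only the format rules given in the table above:

```python
# court_code accepts either a slug ("tokyo_main") or a 5-digit BIT id
# ("31111"). Classify which form a given string is, rejecting numeric
# inputs of the wrong length.
def classify_court_code(code: str) -> str:
    if code.isdigit():
        if len(code) != 5:
            raise ValueError("BIT court ids are exactly 5 digits")
        return "bit_id"
    return "slug"

print(classify_court_code("tokyo_main"))  # slug form
print(classify_court_code("31111"))       # BIT id form
```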
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, providing safety profile. Description adds minimal extra behavioral context beyond retrieving data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, clear and efficient with the essential info; adequate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with good annotations and a single parameter, the description covers the purpose and input format sufficiently, though no output schema or return details are mentioned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description restates the same parameter details already in the schema; no additional meaning added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves Japanese district court jurisdiction/contact info by code, differentiating it from sibling tools focused on auctions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied by context (sibling tools are all auction-related), but the description doesn't explicitly state when to use this tool or exclude alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_upcoming_sales — A · Read-only · Idempotent
List auction properties with upcoming bid openings: auctions whose opening_date falls within the next N days (default 14, max 90). Sorted ascending. Useful for investment screening / calendar exports.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 50 |
| days_ahead | No | Lookahead window | 14 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent behavior. The description adds that results are sorted ascending and the date filter defaults to 14 days (max 90), which is useful context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise. No filler or redundancy; every sentence provides essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only list tool with two optional parameters and no output schema, the description covers the core functionality, filtering, sorting, defaults, and use case. It lacks details on the return format but is sufficient for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds meaning by linking days_ahead specifically to opening_date and clarifying default values, which adds value beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists auctions with a specific date filter (opening_date within next N days) and mentions sorting ascending. It distinguishes itself from sibling tools like search_auctions by focusing on upcoming sales, providing a specific use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions it is useful for investment screening and calendar exports, but does not explicitly state when to use this tool over alternatives like search_auctions. No exclusion criteria or sibling comparisons are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_auctions — A · Read-only · Idempotent
Search Japanese real-estate auction properties (structured data sourced from BIT, bit.courts.go.jp). Filters: keyword (Japanese FTS over title/address), prefecture, court_code, property_type (land/detached_house/apartment/other), price range, bid_period_only. Returns case_id, court, address, sale_base_price_jpy, bid_period, opening_date, photos.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | |
| limit | No | | |
| offset | No | | |
| status | No | | active |
| keyword | No | Japanese keyword over title/address/structure | |
| court_code | No | 5-digit BIT court id (main office 本庁 = 11, branch 支部 = 21, 31, ...) OR slug like "tokyo_main" | |
| prefecture | No | Prefecture in kanji, e.g. "東京都" | |
| max_price_jpy | No | | |
| min_price_jpy | No | | |
| property_type | No | | |
| bid_period_only | No | Only listings whose bid period intersects today | |
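With eleven optional parameters, a client typically builds the argument object by dropping unset filters so that server-side defaults (such as status = active) apply. A hypothetical helper along those lines; the parameter names come from the table above, and nothing here is part of the server itself:

```python
# Assemble search_auctions arguments, omitting any filter that was not
# explicitly set. Unsent parameters fall back to server defaults
# (e.g. status defaults to "active").
def build_search_args(keyword=None, prefecture=None, court_code=None,
                      property_type=None, min_price_jpy=None,
                      max_price_jpy=None, bid_period_only=None,
                      limit=None, offset=None, sort=None, status=None):
    raw = {"keyword": keyword, "prefecture": prefecture,
           "court_code": court_code, "property_type": property_type,
           "min_price_jpy": min_price_jpy, "max_price_jpy": max_price_jpy,
           "bid_period_only": bid_period_only, "limit": limit,
           "offset": offset, "sort": sort, "status": status}
    return {k: v for k, v in raw.items() if v is not None}

args = build_search_args(prefecture="東京都", property_type="apartment",
                         max_price_jpy=30_000_000)
print(args)  # only the three explicitly set filters are sent
```

From a result of this search, an agent would then pass a single case_id to get_auction_detail for the full record.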
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent. The description adds context about full-text search over Japanese fields and the return fields, but does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph with a list of filters and return fields. It front-loads the main purpose but could be more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description lists return fields. However, it does not cover pagination or sorting, which are important for a search tool with 11 parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 45%, and the description adds meaning for keyword (Japanese FTS), prefecture, property_type, and court_code (slug format). However, it omits sort, limit, offset, and status, so the description only partially compensates for the schema gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches Japanese real estate auction properties from BIT data, lists filters and return fields. It distinguishes from sibling tools which are for details, practitioners, or court directories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching auctions but does not explicitly state when to use this tool versus alternatives like get_auction_detail for specific cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.