HoShy
Server Details
Search Rakuten Ichiba products and compare prices via Claude. Zero setup, no API key needed.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Negimaru1025/HoShy
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 3 tools.
Each tool targets a distinct task: price comparison for a specific item, product recommendation by keyword and budget, and hotel search with location and dates. No overlap.
All tools use consistent snake_case and verb_noun pattern (compare_prices, recommend_for_consumer, search_business_hotel).
Three tools is on the low side but covers the core functions of comparing prices, recommending products, and searching hotels. Reasonable for a focused utility server.
The tool surface covers basic search and comparison but lacks operations like managing favorites, booking hotels, or viewing detailed product info. Some gaps exist for a full shopping/travel assistant.
Available Tools
3 tools

compare_prices — Compare prices (find the lowest price)
Read-only

Compares prices across Rakuten Ichiba for a given product name or model number. Returns results ranked by effective price, including shipping and point rebates.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Product name, model number, or keyword (e.g. "Anker PowerCore 10000", "iPhone15 case") | |
| spu_rate | No | Rakuten point SPU multiplier | 5 |
| max_price | No | Maximum price (JPY) | |
| max_results | No | Number of results to return (max 10) | 5 |
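As a sketch of how an agent might invoke this tool over MCP's JSON-RPC transport: the request body below follows the standard MCP `tools/call` method, while the endpoint URL and session setup depend on your client and are omitted; the argument values are illustrative.

```python
import json

# Hypothetical tools/call request for compare_prices.
# Argument values are examples drawn from the parameter table above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_prices",
        "arguments": {
            "query": "Anker PowerCore 10000",  # product name or model number
            "spu_rate": 5,                     # Rakuten SPU point multiplier (default 5)
            "max_results": 5,                  # up to 10 results (default 5)
        },
    },
}
print(json.dumps(request, ensure_ascii=False, indent=2))
```

The response would carry the ranked listings as tool result content; since the tool publishes no output schema, the exact fields must be inspected at runtime.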
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true. Description adds that results are ranked by effective price including shipping and points, which is valuable behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff, purpose front-loaded. Every word is useful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains return format (ranking by effective price). Lacks mention of error handling or pagination, but sufficient for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description does not need to add parameter details. The baseline score of 3 applies, since the description adds no meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'compare' and resource 'prices within Rakuten Market', specifying the ranking by effective price including shipping and points. Sibling tools are unrelated, so no confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for finding lowest prices, but no explicit when-to-use or alternatives. Sibling tools are distinct, so risk of misuse is low, but lacks explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recommend_for_consumer — Find recommended products
Read-only

Finds recommended products on Rakuten Ichiba from a keyword and budget. Returns the top 5 results ranked by effective price after point rebates and by review rating.
| Name | Required | Description | Default |
|---|---|---|---|
| hits | No | Number of results to return (max 10) | |
| budget | No | Budget (JPY); when set, products whose effective price after points falls within budget are prioritized | |
| keyword | Yes | Keyword for the product to search for | |
| spu_rate | No | Rakuten point SPU multiplier: 3 without a Rakuten Card, 5 with a card, 7 with card + app | 5 |
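A comparable hedged sketch of a `tools/call` request for this tool; again the transport details are omitted and the keyword and budget values are purely illustrative.

```python
import json

# Hypothetical tools/call request for recommend_for_consumer.
# keyword is the only required argument; budget and spu_rate refine ranking.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "recommend_for_consumer",
        "arguments": {
            "keyword": "wireless earbuds",  # product keyword (required)
            "budget": 10000,                # JPY; prioritizes items within budget
            "spu_rate": 5,                  # SPU multiplier (card holder default)
        },
    },
}
print(json.dumps(request, ensure_ascii=False, indent=2))
```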
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. Description adds valuable behavioral context: ranking logic (net price after points and review rating), default return count (5 items), and budget constraint affect results.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences effectively communicate purpose and return format. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description explains purpose and result criteria but does not detail output structure. With no output schema, agent may need to infer return fields, which is a gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers all parameters with descriptions. Description mentions keyword and budget but does not add significant meaning beyond schema. Baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it finds recommended products on Rakuten Market by keyword and budget, returns top 5 based on net price after points and review rating. Differentiates from siblings compare_prices and search_business_hotel.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use for product recommendation but does not explicitly state when to use alternatives or when not to use. No comparison with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_business_hotel — Find business-trip hotels
Read-only

Searches Rakuten Travel for business hotels by destination, dates, budget, and travel time. Uses Google Maps to compute actual travel times and filter the results.
| Name | Required | Description | Default |
|---|---|---|---|
| hits | No | Number of results to return (max 10) | |
| adult_num | No | Number of adult guests | |
| destination | Yes | Destination (station, place, or facility name), e.g. "東京駅", "渋谷スクランブル交差点" | |
| travel_mode | No | Travel mode: walking / transit | walking |
| checkin_date | Yes | Check-in date (YYYY-MM-DD) | |
| checkout_date | Yes | Check-out date (YYYY-MM-DD) | |
| budget_per_night | No | Maximum budget per person per night (JPY), e.g. 6000 | |
| max_travel_minutes | No | Maximum travel time to the destination (minutes) | |
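A final sketch of a `tools/call` request for the hotel search; the destination and dates are illustrative placeholders, and the transport layer is again left to the client.

```python
import json

# Hypothetical tools/call request for search_business_hotel.
# destination, checkin_date, and checkout_date are required;
# the remaining arguments narrow the candidate set.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "search_business_hotel",
        "arguments": {
            "destination": "東京駅",          # station, place, or facility name
            "checkin_date": "2025-07-01",    # YYYY-MM-DD
            "checkout_date": "2025-07-02",   # YYYY-MM-DD
            "budget_per_night": 8000,        # JPY per person per night
            "max_travel_minutes": 15,        # filter via Google Maps travel time
            "travel_mode": "walking",        # or "transit"
        },
    },
}
print(json.dumps(request, ensure_ascii=False, indent=2))
```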
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, which is consistent with a search tool. The description adds value by specifying that Google Maps is used to compute actual travel time for filtering, disclosing an important behavioral detail beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loading the core purpose and method. Every sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers input criteria and the Google Maps integration, it does not mention the output format or pagination behavior. Since there is no output schema, the description should provide more context on what the tool returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% parameter description coverage, so the description adds little value beyond summarizing the parameter categories (destination, dates, budget, travel time). Baseline score of 3 is appropriate as schema already sufficiently explains each parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for business hotels on Rakuten Travel using destination, dates, budget, and travel time, with Google Maps filtering. It effectively differentiates from sibling tools (compare_prices, recommend_for_consumer) which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (searching business hotels with travel time constraints) but does not explicitly state when not to use or mention alternative tools. However, the siblings are sufficiently different, so the guidance is clear but not exhaustive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.