ZahabPrice — Live Gold & Silver Prices
Server Details
Live gold & silver prices for 37 countries. Tamara/Tabby installment calculators for Saudi Arabia.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 4 of 4 tools scored. Lowest: 3.5/5.
Each tool serves a unique purpose: getting gold price, getting silver price, calculating installment plans, and listing countries. No overlap or ambiguity.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., get_gold_price, list_countries), making them predictable and easy to understand.
With 4 tools, the server is focused on its domain of live gold/silver prices and installment calculations. The count is well-balanced, neither too sparse nor excessive.
The tool set covers the core functionalities of price retrieval and installment calculation, but lacks support for different purity levels (e.g., 24K vs 22K) and price history, which are common use cases.
Available Tools
4 tools
calculate_installment
Calculates gold purchase installment plan via Tamara or Tabby. Available in Saudi Arabia only (prices in SAR). Tamara plans: 2/3/4 months (0% fee), 6/9/12 months (17% fee), 24 months (40% fee). Tabby plans: 3 months (0% fee), 4/6 months (1.15% fee). Minimum purchase: 200 SAR.
| Name | Required | Description | Default |
|---|---|---|---|
| price | Yes | Total purchase price in SAR. | |
| months | No | Number of installments. Tamara: 2/3/4/6/9/12/24. Tabby: 3/4/6. | |
| provider | Yes | Installment provider (Tamara or Tabby). | |
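For orientation, the fee schedule above implies arithmetic like the sketch below. This is an illustrative reimplementation of the documented rates, not the server's code; the provider values ("tamara", "tabby") and the output fields are assumptions, since the tool's response format is not documented.

```python
# Illustrative sketch only: reproduces the fee schedule stated in the tool
# description. The real tool's provider enum values and response fields are
# not documented, so "tamara"/"tabby" and the returned keys are assumptions.
TAMARA_FEES = {2: 0.0, 3: 0.0, 4: 0.0, 6: 0.17, 9: 0.17, 12: 0.17, 24: 0.40}
TABBY_FEES = {3: 0.0, 4: 0.0115, 6: 0.0115}

def estimate_installment(price: float, months: int, provider: str) -> dict:
    """Estimate total cost and monthly payment from the documented fee rates."""
    if price < 200:
        raise ValueError("Minimum purchase is 200 SAR")
    fees = TAMARA_FEES if provider == "tamara" else TABBY_FEES
    if months not in fees:
        raise ValueError(f"{provider} does not offer a {months}-month plan")
    total = price * (1 + fees[months])
    return {"total_sar": round(total, 2), "monthly_sar": round(total / months, 2)}

# Example: 1200 SAR over 6 months via Tamara (17% fee) -> 1404.00 total, 234.00/month
print(estimate_installment(1200, 6, "tamara"))
```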
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses regional restriction, currency, minimum purchase, and fee structures per provider and plan length. However, it does not explicitly state that the tool computes monthly payments or total cost, leaving some ambiguity about the output.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured: one sentence for purpose and region, then bullet-like details for each provider's plans and fees. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers key behavioral aspects and constraints. It lacks an explicit description of the output format (e.g., monthly payment, total fee), but the context of a calculation tool suggests numeric results. Overall, it is sufficient for the tool's moderate complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds value by explaining which months are valid for each provider, fee percentages, and the minimum purchase constraint. This goes beyond the enum descriptions in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool calculates gold purchase installment plans via Tamara or Tabby, with regional and currency restrictions. It distinguishes from siblings (get_gold_price, get_silver_price, list_countries) by focusing on installment calculation, not price retrieval or country listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides detailed context: only for gold purchases in Saudi Arabia with SAR, specific providers and fee structures, and minimum purchase amount. It implicitly advises against use outside these conditions but does not explicitly mention alternatives or when to use sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_gold_price
Returns live gold price per gram from zahabprice.com — updated every ~60 seconds. Infer country from context: currency clues (ريال/SAR→sa, جنيه/EGP→eg, درهم/AED→ae, دينار→kw/bh/iq/jo), explicit location, or language. If unsure, use default.
| Name | Required | Description | Default |
|---|---|---|---|
| karat | No | Gold karat. Default: 21 (most common in Arab markets). 24K = pure gold. | |
| country | No | ISO country code. Default: sa (Saudi Arabia). Infer from context when possible. | sa |
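Assuming standard MCP JSON-RPC framing over the Streamable HTTP transport, a call to this tool might be shaped like the sketch below. The argument values follow the inference guidance above; the server's response fields are not documented here.

```python
# Sketch of an MCP tools/call request for get_gold_price (standard MCP
# JSON-RPC framing assumed; the server's response fields are undocumented).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_gold_price",
        "arguments": {
            # The user asked about prices in EGP, so country is inferred as "eg".
            # Omitting karat would fall back to the default of 21.
            "karat": 24,
            "country": "eg",
        },
    },
}
```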
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description covers update frequency, source, and inference behavior. It lacks details on error conditions or data format, but this is sufficient for a simple read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states core functionality, second provides usage guidance. No waste, front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers key aspects: input parameters, update frequency, and inference. Lacks mention of currency for returned price, which is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but description adds valuable context on inferring country from clues and defaults, going beyond schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns live gold price per gram, using a specific verb and resource. It distinguishes from siblings like get_silver_price.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides inference logic for country based on context, and mentions update frequency. Does not explicitly state when not to use or compare with alternatives, but context makes it clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_silver_price
Returns live silver price per gram from zahabprice.com — updated every ~60 seconds. Infer country from context same as get_gold_price.
| Name | Required | Description | Default |
|---|---|---|---|
| purity | No | Silver purity. Default: 999 (fine silver). 925 = sterling silver. | |
| country | No | ISO country code. Default: sa (Saudi Arabia). | sa |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It mentions the update frequency (~60 seconds) and data source, but lacks details on safety (read-only), authentication requirements, error handling, or return format. This is insufficient for full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two short, front-loaded sentences. The first sentence provides the core purpose, source, and update frequency. The second sentence gives cross-reference guidance. No unnecessary words are included.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no annotations), the description covers the main action and data source but omits mention of the purity parameter and the return value structure. This leaves some gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so both parameters are already well documented in the schema. The description adds little beyond it, only noting that country should be inferred from context as with get_gold_price; this small addition warrants only a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns live silver price per gram from a specific source, updated frequently. The verb 'Returns' and resource 'live silver price per gram' are specific, and the sibling get_gold_price indicates the distinction between silver and gold.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage via the analogy to get_gold_price, but it does not explicitly state when to use this tool over siblings or provide exclusion criteria. The guidance is implicit rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_countries
Returns all 37 supported countries with ISO codes and currencies. Use when the user asks about an available country or you need to verify a country code.
No parameters.
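One plausible workflow is to verify a country code with this tool before requesting a price, as the description suggests. The sketch below assumes a hypothetical call_tool helper and a response of {"code", "currency"} entries; neither is documented by the server.

```python
from typing import Any

def call_tool(name: str, arguments: dict) -> Any:
    """Placeholder for however your MCP client issues tools/call requests."""
    raise NotImplementedError

# Hypothetical workflow: the response shape ({"code": ..., "currency": ...}
# entries) is an assumption, not documented by the server.
countries = call_tool("list_countries", {})
codes = {c["code"] for c in countries}
if "ae" in codes:
    price = call_tool("get_gold_price", {"country": "ae", "karat": 22})
```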
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It accurately states the behavior (returns all countries with details) but does not disclose any limitations, edge cases, or performance traits. Adequate for a no-parameter read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundancy. Every word serves a purpose. Front-loaded with key action and result.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description covers the purpose, content, and usage context. It lacks details like response format or pagination, but for a simple list tool it is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. Description does not need to add parameter details. The tool is fully defined by the description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'returns' and clearly identifies resource: '37 supported countries with ISO codes and currencies'. This distinguishes it from sibling tools (calculate_installment, get_gold_price, get_silver_price) which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when the user asks about an available country or you need to verify a country code'. Does not explicitly state when not to use, but context is sufficient for this simple tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
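As a quick sanity check before waiting for automatic verification, you can confirm the file is reachable at the well-known path. The sketch below uses a placeholder domain; substitute your server's domain.

```python
# Quick sanity check (not an official Glama tool): confirm glama.json is
# served at the well-known path. The domain below is a placeholder.
import json
import urllib.request

url = "https://your-server-domain.example/.well-known/glama.json"
with urllib.request.urlopen(url) as resp:
    doc = json.load(resp)
print(doc.get("maintainers"))
```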
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.