Inferventis MCP Server
Server Details
Loan & mortgage calculator, compound interest, ROI, crypto prices, FX conversion for AI agents.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 20 of 20 tools scored. Lowest: 3.7/5.
Multiple tools for currency conversion (e.g., currency_convert, currency_convert_lite, currency_fx_lite) and cryptocurrency (crypto_price, crypto_price_lite) have overlapping purposes, though descriptions attempt to differentiate. Also, bank_accounts and open_banking_transactions overlap. This may cause confusion for an agent.
All tool names follow a consistent snake_case pattern with clear verb_noun structure (e.g., currency_convert, crypto_price, financial_calculator). Variants like 'lite' are uniformly applied.
With 20 tools, the set is slightly large but still scoped to a financial/utility domain. Each tool has a defined purpose, though some could be merged (e.g., multiple currency converters).
The set covers core financial operations (currency, crypto, stocks, payments, banking, calculations) well. Minor gaps include lack of historical stock data and limited news sources. Overall, agents can accomplish most common tasks.
Available Tools
19 tools
bank_accounts
Retrieves bank account details and recent transaction history via a connected bank API integration. Returns a list of transactions for the specified account, or for all linked accounts when no account ID is provided. Use bank_accounts when an agent needs to inspect account balances, review recent spending, categorise transactions, or reconcile records against a specific bank account. Prefer open_banking_transactions when the integration uses a PSD2 Open Banking provider (TrueLayer) covering 300+ UK and European banks — open_banking_transactions returns richer transaction metadata including merchant names, categories, and running balances. Prefer stripe_payments when the source of payments is a Stripe merchant account rather than a retail bank account. This tool requires a valid bank API credential to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of transactions to return. Maximum 100. | 20 |
| account_id | No | Identifier of the specific bank account to query. When omitted, transactions across all linked accounts are returned. | |
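A minimal sketch of how an agent-side client might assemble the arguments for this tool, assuming the documented defaults (the helper name and the clamping behaviour are illustrative, not part of the server):

```python
def build_bank_accounts_args(account_id=None, limit=None):
    """Assemble a bank_accounts argument payload.

    Mirrors the documented behaviour: limit defaults to 20 server-side
    and is capped at 100; omitting account_id queries all linked accounts.
    """
    args = {}
    if account_id is not None:
        args["account_id"] = account_id
    if limit is not None:
        # Clamp to the documented maximum rather than let the call fail.
        args["limit"] = max(1, min(int(limit), 100))
    return args
```

Leaving `limit` out entirely defers to the server-side default of 20.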
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses connectivity requirement (bank API credential) and default behavior when account_id omitted. However, lacks details on rate limits, error handling, or data freshness. No annotations present, so description carries full burden; still reasonably transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured sentences: the first states the purpose, the middle ones provide usage alternatives, and the last states the prerequisite. No fluff, front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, and description only says 'Returns a list of transactions' without detailing fields. Lacks pagination or cursor info. Adequate but could be more complete given no annotations or output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with clear descriptions. Description adds value by explaining the effect of omitting account_id (returns all accounts), going beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves bank account details and transaction history. It explicitly distinguishes from sibling tools open_banking_transactions and stripe_payments by highlighting different use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use this tool vs. alternatives: prefer open_banking_transactions for Open Banking providers, prefer stripe_payments for Stripe merchant accounts. Also mentions credential requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crypto_fx_rates
Converts an amount between any cryptocurrency and fiat currency pair, or between two cryptocurrencies, using real-time exchange rates sourced from CoinAPI. Supports all crypto-to-fiat (BTC/USD, ETH/EUR), fiat-to-crypto (USD/BTC), and cross-crypto (BTC/ETH) conversions. Use crypto_fx_rates when the conversion involves at least one cryptocurrency and a specific amount must be converted. Prefer crypto_price when only the spot price of a coin in fiat is needed without converting a specific amount, or prefer crypto_price_lite for the same spot price with a minimal response schema. Prefer currency_convert or currency_fx_lite when both currencies are fiat — those tools use ECB/Frankfurter or mid-market rates and do not consume a CoinAPI quota. Requires a CoinAPI key to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target currency code. Accepts ISO 4217 fiat codes (USD, EUR, GBP) or cryptocurrency ticker symbols (BTC, ETH, SOL, XRP). Case-insensitive. | |
| from | Yes | Source currency code. Accepts ISO 4217 fiat codes (USD, EUR, GBP) or cryptocurrency ticker symbols (BTC, ETH, SOL, XRP). Case-insensitive. | |
| amount | Yes | Amount to convert. Must be a positive number. Denominated in the 'from' currency. | |
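Cross-crypto rates of the BTC/ETH kind can be derived from two fiat quotes; a sketch of the arithmetic (illustrative only; CoinAPI computes its own cross rates):

```python
from decimal import Decimal

def cross_rate(from_usd_price, to_usd_price):
    """Derive a crypto-to-crypto rate from two USD quotes:
    rate(BTC/ETH) = price(BTC/USD) / price(ETH/USD)."""
    return Decimal(str(from_usd_price)) / Decimal(str(to_usd_price))

def convert(amount, rate):
    """Converted amount = amount in 'from' units times the 'to'-per-'from' rate."""
    return Decimal(str(amount)) * rate
```

Decimal is used here to keep the quotient exact for clean ratios; real quotes rarely divide evenly and would carry Decimal's default precision.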
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so description carries full burden. It discloses the data source (CoinAPI), that it consumes quota, and the key requirement. No side effects or destructive actions are mentioned, which is appropriate for a read-only conversion tool. Could be slightly more detailed about response format or rate limits, but sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is six sentences long, each contributing meaningful information. It is front-loaded with the main action and then specifies usage context. Could be trimmed slightly but remains efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no output schema, and no annotations, the description covers all essential aspects: purpose, usage conditions, alternatives, prerequisites, and examples. An agent can confidently decide whether to invoke this tool based on the description alone.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds value by providing example pairs and clarifying that amount is denominated in the 'from' currency and that codes are case-insensitive. This goes beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it converts amounts between any cryptocurrency and fiat pair, or two cryptocurrencies, using real-time rates from CoinAPI. It distinguishes between crypto-to-fiat, fiat-to-crypto, and cross-crypto conversions, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: when conversion involves at least one cryptocurrency and live CoinAPI rates are needed. Also provides alternative tools for spot prices (crypto_price) and fiat-only conversions (currency_convert, currency_fx_lite), and notes the requirement for a CoinAPI key.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crypto_price
Retrieves real-time price data for any cryptocurrency listed on CoinGecko. Returns the current price in any fiat currency, 24-hour percentage change, market capitalisation, and 24-hour trading volume. Supports all major cryptocurrencies including Bitcoin (BTC), Ethereum (ETH), Solana (SOL), XRP, Cardano (ADA), Dogecoin (DOGE), Polygon (MATIC), Chainlink (LINK), Avalanche (AVAX), and 10,000+ additional coins. Use crypto_price when an agent needs the full market picture for a digital asset — price, change, market cap, and volume in one call. Prefer crypto_price_lite when only the spot price and 24h change are needed and a smaller response payload is preferred. Use crypto_fx_rates (via CoinAPI) when converting a specific amount between a cryptocurrency and fiat, or between two cryptocurrencies. Do not use this tool for fiat-to-fiat currency conversion (e.g. USD to EUR) — use currency_convert instead. Do not use when historical price data for a specific past date is required — this tool returns live spot prices only.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Cryptocurrency name or ticker symbol. Accepts common symbols (BTC, ETH, SOL, XRP, ADA, DOGE, MATIC, LINK, AVAX) or full names (bitcoin, ethereum, solana). Case-insensitive. | |
| currency | No | Target fiat currency code for the price. Must be a valid ISO 4217 code. Examples: usd, eur, gbp, jpy, chf, aud, cad. | USD |
| include_market_data | No | Whether to include market capitalisation and 24-hour trading volume alongside the price. Set to false for a lighter response when only the price is needed. | true |
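CoinGecko's public simple/price endpoint is the natural backend for a tool like this; a sketch of the request URL such a tool might build (the CoinGecko id, e.g. 'bitcoin' rather than 'BTC', is assumed to be resolved beforehand — the ticker-to-id mapping the tool performs is omitted here):

```python
from urllib.parse import urlencode

COINGECKO_SIMPLE_PRICE = "https://api.coingecko.com/api/v3/simple/price"

def coingecko_price_url(coin_id, currency="usd", include_market_data=True):
    """Build a simple/price request mirroring this tool's parameters."""
    params = {"ids": coin_id.lower(), "vs_currencies": currency.lower(),
              "include_24hr_change": "true"}
    if include_market_data:
        # Extra fields that make up the "full market picture" response.
        params["include_market_cap"] = "true"
        params["include_24hr_vol"] = "true"
    return f"{COINGECKO_SIMPLE_PRICE}?{urlencode(params)}"
```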
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. It implies a read-only operation ('retrieves') but does not explicitly state non-destructive nature, rate limits, or authentication needs. For a simple data retrieval tool, it is adequate but not fully transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph that fronts the purpose and key details. It is reasonably concise without excess, though it could be slightly trimmed without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has three parameters and no output schema, the description explains return fields and provides examples. It lacks error handling or rate limit info, but for a simple price tool, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already has 100% description coverage, so the description adds value by listing example coins, stating case-insensitivity, and explaining the 'include_market_data' parameter's effect. This goes beyond the schema's descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves real-time price data for any cryptocurrency on CoinGecko, specifying the returned fields (price, 24h change, market cap, volume). It also lists examples, differentiating from the sibling crypto_price_lite, which omits the extended market data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool (when the full market picture of price, change, market cap, and volume is needed in one call) and when not to: fiat-to-fiat conversion (use currency_convert) and historical price lookups. It also names crypto_price_lite and crypto_fx_rates as alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crypto_price_lite
Retrieves the current spot price and 24-hour change for any cryptocurrency using the CoinGecko public API. Returns price, percentage change, and a timestamp. This is a lightweight variant of crypto_price that omits extended market data (market cap, volume) — use it when only the raw price and 24h direction are needed. Prefer crypto_price when the agent also needs market capitalisation, trading volume, or richer structured output. Use crypto_fx_rates when converting a specific amount between a cryptocurrency and fiat (e.g. 'convert 0.5 BTC to USD') rather than looking up a spot price. Supports all major coins including BTC, ETH, SOL, XRP, ADA, DOGE, and 10,000+ CoinGecko-listed assets. Accepts ticker symbols (BTC, ETH) or full names (bitcoin, ethereum). Target currency defaults to USD but accepts any ISO 4217 code.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Cryptocurrency ticker symbol (BTC, ETH, SOL) or full CoinGecko name (bitcoin, ethereum, solana). Case-insensitive. | |
| currency | No | ISO 4217 fiat currency code for the returned price. Examples: usd, eur, gbp, jpy. | usd |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It states the data source (CoinGecko public API), the returned fields (price, percentage change, timestamp), and mentions it is a lightweight variant. However, it does not disclose potential rate limits or authentication requirements, but for a simple read query this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with every sentence essential. It is front-loaded with the core purpose, and each sentence adds distinct value: purpose, output, guidance versus the sibling, input flexibility, and the default currency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately describes the return fields. It covers usage context, alternative tool, supported inputs, and default behavior. For a simple price retrieval tool, this provides sufficient completeness for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds useful context beyond the schema: it specifies the default target currency (USD), acceptable ISO 4217 codes, and that coin accepts both ticker symbols and full names in a case-insensitive manner. This enhances understanding beyond the structured schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves current spot price and 24-hour change for any cryptocurrency. It distinguishes itself from the sibling tool crypto_price by being a lightweight variant that omits extended market data, making its purpose specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool ('when only the raw price and direction are needed') and when to prefer the alternative ('when the agent also needs market capitalisation, trading volume, or richer structured output'). It also specifies supported assets and input formats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_convert
Converts a monetary amount from one currency to another using live exchange rates sourced from the Frankfurter API (European Central Bank data). Returns the converted amount, the exact exchange rate applied, and the timestamp of the rate. Supports 30+ currencies including USD, EUR, GBP, JPY, CHF, AUD, CAD, SEK, NOK, DKK, SGD, HKD, and all major ISO 4217 codes. Use currency_convert when an agent needs to convert prices, invoices, salaries, payments, or any financial figure between fiat currencies in real time with full ECB-backed rate metadata. Prefer currency_convert_lite or currency_fx_lite when only the numeric converted amount and rate are needed without metadata. Use currency_rates when the conversion must use a historical rate from a specific past date. Do not use this tool for cryptocurrency conversion — use crypto_fx_rates (amount conversion via CoinAPI) or crypto_price (spot price lookup via CoinGecko).
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | The target currency code in ISO 4217 format. Must be uppercase. Examples: EUR, JPY, CHF, SGD, HKD, SEK. | |
| from | Yes | The source currency code in ISO 4217 format. Must be uppercase. Examples: USD, GBP, EUR, JPY, CHF, AUD, CAD. | |
| amount | Yes | The monetary amount to convert. Must be a positive number. Decimals are supported. Example: 1500.50 to convert one thousand five hundred units and fifty cents. | |
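Frankfurter performs the conversion server-side via its /latest endpoint; a sketch of building the request and reading the result (the sample payload shape follows Frankfurter's documented response, with the converted amount under the target currency key in `rates`):

```python
from urllib.parse import urlencode

def frankfurter_url(amount, from_code, to_code):
    """Build a Frankfurter /latest request; the API converts server-side."""
    query = urlencode({"amount": amount,
                       "from": from_code.upper(),
                       "to": to_code.upper()})
    return f"https://api.frankfurter.app/latest?{query}"

def parse_conversion(payload, to_code):
    """Extract (converted_amount, rate_date) from a Frankfurter response,
    e.g. {"amount": 1500.5, "base": "USD", "date": "2024-01-02",
          "rates": {"EUR": 1372.96}}."""
    return payload["rates"][to_code.upper()], payload["date"]
```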
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full burden. It discloses that it uses live exchange rates and returns the converted amount, rate, and timestamp. However, it lacks details on potential limitations, such as rate update frequency, data source reliability, or what happens on network failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description opens with the core conversion action, then details the returned fields, supported currencies, use cases, and alternatives. No redundant words; front-loaded with the most critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers input (currency codes, amount), output (converted amount, rate, timestamp), and use cases. It does not mention error handling, rate limits, or how often rates update. For a simple conversion tool with full schema coverage and no output schema, this is nearly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds modest value beyond the schema. It confirms that any ISO 4217 pair is accepted and gives examples, but this is already implied by the format description in the schema. The schema already states amount must be positive, so no new semantic information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'converts', the resource 'monetary amount', and the scope 'from one currency to another using live exchange rates', effectively distinguishing it from siblings like currency_convert_lite and currency_fx_lite, which return minimal output or draw from different rate sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides use cases ('convert prices, invoices, salaries, payments, or any financial figure between fiat currencies in real time'), names alternatives (currency_convert_lite, currency_fx_lite, currency_rates), and states when not to use it: cryptocurrency conversion, for which it points to crypto_fx_rates and crypto_price.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_convert_lite
Converts a monetary amount between any two fiat currencies using live exchange rates from the Frankfurter API (European Central Bank data). Returns the converted amount and the exchange rate applied. This is a lightweight variant of currency_convert — minimal response without rate timestamp or source attribution. Use currency_convert_lite when only the converted value and rate are needed and ECB/Frankfurter-sourced rates are preferred. Prefer currency_convert when the agent also needs rate timestamp and richer structured output. Prefer currency_fx_lite for the same minimal output (amount + rate) when the ECB data source is not specifically required — both return identical fields but draw from different rate providers. Use currency_rates when a historical rate from a specific past date is required (e.g. accounting, tax, or audit). Use currency_convert_open as a fallback when Frankfurter is unavailable or rate-limited. Does not support cryptocurrency pairs — use crypto_price or crypto_fx_rates for crypto-to-fiat conversions. Accepts all major ISO 4217 currency codes (USD, EUR, GBP, JPY, CHF, AUD, CAD, SGD, NOK, SEK, DKK, PLN, CZK, HUF, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | ISO 4217 code of the target currency (e.g. 'EUR', 'JPY', 'CHF'). Case-insensitive. | |
| from | Yes | ISO 4217 code of the source currency (e.g. 'USD', 'GBP', 'EUR'). Case-insensitive. | |
| amount | Yes | Monetary amount to convert. Must be a positive number. | |
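The 'prefer X when Y' guidance across these currency tools amounts to a small decision procedure; a toy dispatcher that encodes it (tool names are from this page, the flag names are invented for illustration):

```python
def pick_fiat_converter(need_historical=False, frankfurter_down=False,
                        need_timestamp=False, ecb_required=False):
    """Encode the documented routing between the fiat conversion tools."""
    if need_historical:
        return "currency_rates"         # rate from a specific past date
    if frankfurter_down:
        return "currency_convert_open"  # fallback open-rate feed
    if need_timestamp:
        return "currency_convert"       # full ECB-backed rate metadata
    # Minimal amount + rate output: pick by the required data source.
    return "currency_convert_lite" if ecb_required else "currency_fx_lite"
```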
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully bears the transparency burden. It discloses the use of the Frankfurter API, the minimal response schema (no rate timestamp, base currency metadata, or attribution), and unsupported currency types. However, it does not explicitly state that the tool is read-only or safe, which is implied but not explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is adequately sized and front-loaded with the core action. It contains multiple sentences, each adding value (purpose, comparison, limitations). While slightly verbose, it remains efficient and well-structured, earning a 4.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description explains the return values (converted amount and exchange rate). It also covers the data source, limitations (no crypto), and use cases. This provides complete context for an agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Since schema description coverage is 100%, the baseline is 3. The description adds that amount must be positive (already in schema) and lists major currency codes, but does not provide additional semantic depth beyond what the schema already conveys.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts monetary amounts between fiat currencies using live exchange rates, specifying the data source (Frankfurter API/ECB). It distinguishes itself from sibling 'currency_convert' as a lightweight variant that returns minimal response, thus meeting the 'specific verb+resource' and sibling differentiation criteria.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool (when only converted value and rate are needed) and when to use the alternative 'currency_convert' (for richer output or historical context). It also clarifies the tool does not support cryptocurrency and directs to 'crypto_price' or 'crypto_fx_rates' for such cases, providing excellent usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_convert_open
Converts a monetary amount between two fiat currencies using live exchange rates from an open currency exchange API. Returns the converted amount and the rate applied. Use currency_convert_open as an alternative live-rate source when currency_convert (Frankfurter/ECB) or currency_fx_lite are unavailable or rate-limited. The underlying source is an open public exchange rate feed suitable for informational use. Prefer currency_convert or currency_rates when ECB-auditable Frankfurter rates are required for accounting or compliance. Prefer currency_convert_lite for the same minimal output (amount + rate) backed by ECB/Frankfurter rates. Prefer currency_fx_lite for lightweight mid-market conversions. Does not support cryptocurrency pairs — use crypto_fx_rates for any conversion involving a digital asset.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | ISO 4217 target currency code (e.g. 'EUR', 'JPY', 'CHF'). Case-insensitive. | |
| from | Yes | ISO 4217 source currency code (e.g. 'USD', 'GBP', 'EUR'). Case-insensitive. | |
| amount | Yes | Monetary amount to convert. Must be a positive number. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description fully discloses the data source (open public exchange rate feed for informational use), return values (converted amount and rate), and scope (fiat only, no crypto). It lacks mention of idempotency but conversion tools are inherently read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Each sentence serves a distinct purpose: main function, return info, fallback guidance, data-source caveat, sibling alternatives, and the crypto exclusion. No redundant text, front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately covers input, output, rate source, and alternatives. With many siblings, it provides sufficient information for correct tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds context about live rates and fiat currencies but does not improve upon the schema's definition of the three parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it converts monetary amounts between fiat currencies using live exchange rates from an open API, and explicitly differentiates from sibling tools like currency_convert and currency_fx_lite by specifying use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use this tool (as alternative when others are unavailable), when to prefer other tools (for ECB-auditable rates use currency_convert, for lightweight use currency_fx_lite), and when not to use (crypto).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_fx_lite
Converts a foreign exchange (FX) amount between two fiat currencies using live mid-market rates. Returns the converted amount and the exchange rate applied. Use currency_fx_lite when only the numeric result is required and the ECB/Frankfurter data source is not specifically needed. Prefer currency_convert when richer metadata (rate timestamp, ECB-backed Frankfurter source) is needed. Prefer currency_convert_lite for the same minimal output (amount + rate) when ECB/Frankfurter rates are specifically required — both tools return identical fields but draw from different rate providers. Use currency_rates when a historical rate from a specific past date is required. Use currency_convert_open as an alternative open-rate source. Does not support cryptocurrency pairs — use crypto_fx_rates for crypto-to-fiat or crypto-to-crypto conversions, or crypto_price_lite for a spot price lookup. Accepts all major ISO 4217 currency codes.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | ISO 4217 target currency code (e.g. 'EUR', 'CHF', 'AUD', 'CAD'). Case-insensitive. | |
| from | Yes | ISO 4217 source currency code (e.g. 'USD', 'GBP', 'EUR', 'JPY'). Case-insensitive. | |
| amount | Yes | Monetary amount to convert. Must be a positive number. | |
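As a sketch of what these constraints imply client-side, a hypothetical helper (not part of the server) that normalises the arguments before the call:

```python
def fx_args(amount: float, from_code: str, to_code: str) -> dict:
    """Build arguments for a currency_fx_lite call (hypothetical helper).

    Codes are case-insensitive per the parameter table, so they are
    upper-cased here for readability; amount must be positive.
    """
    if amount <= 0:
        raise ValueError("amount must be a positive number")
    return {"amount": amount, "from": from_code.upper(), "to": to_code.upper()}
```

For example, `fx_args(500, "usd", "eur")` yields `{"amount": 500, "from": "USD", "to": "EUR"}`.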
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses return of converted amount and rate, and use of live mid-market rates. Lacks details on rate freshness or error handling, but sufficient for a lightweight tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with core action, followed by usage guidance and exclusions. No redundant sentences, very efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately describes return shape (converted amount and rate) and covers alternatives and exclusions. No output schema, but sufficient for simple use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already covers all three parameters with descriptions. The description reiterates 'all major ISO 4217 currency codes', adding little new beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Converts' and specific resource 'FX amount between two fiat currencies using live mid-market rates'. Distinctly differentiates from siblings like currency_convert and crypto_fx_rates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (lightweight, ad-hoc, numeric result) and when to prefer alternatives (richer metadata, Frankfurter rates). Also excludes crypto pairs with clear redirection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_rates
Retrieves live and historical fiat currency exchange rates from the Frankfurter API (sourced from the European Central Bank). Supports both real-time conversion and historical rate lookup for any past date, making it the preferred tool when auditable, ECB-sourced rate data is required. Use currency_rates when a rate from a specific past date is required (e.g. accounting, tax, or audit), or when the ECB source must be documented. Prefer currency_convert when only a live conversion is needed with a richer structured response. Prefer currency_convert_lite for lightweight live ECB conversions without historical requirements. Prefer currency_fx_lite when ECB sourcing is not required and only a lightweight live result is needed. Use currency_convert_open when a non-ECB rate source is acceptable and Frankfurter is unavailable. Does not support cryptocurrency pairs — use crypto_fx_rates for any conversion involving a digital asset.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | ISO 4217 target currency code (e.g. 'EUR', 'JPY', 'CHF'). Case-insensitive. | |
| from | Yes | ISO 4217 source currency code (e.g. 'USD', 'GBP', 'EUR'). Case-insensitive. | |
| amount | Yes | Monetary amount to convert. Must be a positive number. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool supports historical rate lookup and sources from ECB, but it does not detail potential limitations (e.g., date range, rate availability, error handling). Still, it adds meaningful behavioral context beyond a bare-bones description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by usage scenarios and comparisons. It is somewhat lengthy but every sentence adds value; a minor reduction could improve conciseness without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of multiple sibling tools and the requirement to handle both live and historical rates with a specific data source, the description covers the key contextual points. The lack of an output schema is compensated by the clear purpose and usage guidance. It could mention return format or error behavior for full completeness, but it is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already covers all three parameters (amount, from, to) with 100% description coverage. The description does not add additional meaning or usage details for these parameters beyond what is in the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool retrieves live and historical fiat currency exchange rates from the ECB-sourced Frankfurter API. It distinguishes itself from siblings by emphasizing historical and auditable rate retrieval, making its purpose specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (historical rates for accounting/tax/audit, ECB source requirement) and explicitly names alternatives for other scenarios: currency_convert for rich live conversions, currency_convert_lite or currency_fx_lite for lightweight live conversions, and crypto_fx_rates for cryptocurrency pairs. This is exemplary.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
financial_calculator
Performs precise financial calculations across six calculation types entirely locally with no external API dependency. compound_interest computes the final value and total interest earned on a principal over time at a given annual rate. loan_repayment calculates the monthly payment, total repayable amount, and total interest for a mortgage or loan given the principal, annual rate, and term in months. roi returns return on investment as a percentage and absolute profit or loss, with optional annualised ROI when a holding period is provided. present_value discounts a future cash amount back to its current value using a discount rate. future_value projects a present amount forward at a compounding annual rate. break_even finds the unit volume and revenue at which fixed and variable costs are fully covered by sales. Use this tool when an agent needs to perform any structured financial calculation — loan affordability, investment return, discounted cash flow, or cost analysis. Prefer financial_calculator_lite when only the single headline result is needed rather than a full structured breakdown. Do not use this tool to fetch live market prices or exchange rates — use stock_quote for stock prices, crypto_price for cryptocurrency prices, or currency_convert for FX rates.
| Name | Required | Description | Default |
|---|---|---|---|
| rate | No | Annual interest or discount rate expressed as a percentage. Example: 5 for 5% per year. Used for compound_interest, loan_repayment, present_value, and future_value. | |
| periods | No | Number of time periods. For compound_interest and future_value: years. For loan_repayment: months. For roi: years used for annualised ROI. | |
| principal | No | Starting amount or loan amount in your chosen currency. Used for compound_interest, loan_repayment, and future_value calculations. | |
| final_value | No | Ending value of an investment or asset. Used for roi and present_value calculations. | |
| fixed_costs | No | Total fixed costs of the business or project that do not vary with output volume. Used for break_even calculations. | |
| discount_rate | No | Annual discount rate as a percentage for present value calculations. Represents the required rate of return or cost of capital. | |
| future_amount | No | The target future cash amount to discount back to present value. Used for present_value calculations. | |
| initial_value | No | Starting value of an investment or asset. Used for roi calculations. | |
| price_per_unit | No | The selling price per unit of product or service. Used for break_even calculations. | |
| calculator_type | Yes | The type of financial calculation to perform. Must be one of: compound_interest, loan_repayment, roi, present_value, future_value, break_even. | |
| variable_cost_per_unit | No | The variable cost incurred for each unit produced or sold. Used for break_even calculations. | |
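The six calculation types correspond to standard closed-form formulas. A minimal local sketch of the underlying maths (hypothetical function names; the server's actual implementation is not published):

```python
def compound_interest(principal, rate_pct, years):
    """Final value and total interest at an annual percentage rate (5 = 5%)."""
    final = principal * (1 + rate_pct / 100) ** years
    return final, final - principal

def loan_repayment(principal, annual_rate_pct, term_months):
    """Monthly payment, total repayable, and total interest for an amortising loan."""
    r = annual_rate_pct / 100 / 12  # periodic (monthly) rate
    monthly = principal / term_months if r == 0 else principal * r / (1 - (1 + r) ** -term_months)
    total = monthly * term_months
    return monthly, total, total - principal

def roi(initial_value, final_value, years=None):
    """ROI percentage and absolute profit; annualised ROI when a period is given."""
    profit = final_value - initial_value
    annualised = None if not years else ((final_value / initial_value) ** (1 / years) - 1) * 100
    return profit / initial_value * 100, profit, annualised

def present_value(future_amount, discount_rate_pct, years):
    """Discount a future cash amount back to its current value."""
    return future_amount / (1 + discount_rate_pct / 100) ** years

def future_value(principal, rate_pct, years):
    """Project a present amount forward at a compounding annual rate."""
    return principal * (1 + rate_pct / 100) ** years

def break_even(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Unit volume and revenue at which sales cover fixed and variable costs."""
    units = fixed_costs / (price_per_unit - variable_cost_per_unit)
    return units, units * price_per_unit
```

For example, `loan_repayment(200000, 4.5, 360)` gives a monthly payment of roughly 1013.37.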
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description clearly states calculations are performed locally with no external API dependency, but with no annotations, the burden is on the description to disclose all behavioral aspects. It does not mention potential error conditions (e.g., division by zero, negative rates) or output format specifics, which is a gap for a tool with multiple calculation types.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single dense paragraph with 10 sentences. It front-loads the main purpose effectively. While it could be more succinct with bullet points, every sentence serves a clear purpose, earning a high score without being wordy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 calculation types, 11 parameters, no output schema), the description provides thorough coverage: each calculation type's inputs and outputs are explained, and the local computation nature is stated. An agent can fully understand when and how to invoke the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description adds significant value by explaining which parameters apply to which calculation types, e.g., 'rate: Annual interest... Used for compound_interest, loan_repayment, present_value, and future_value.' This surpasses a baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it performs 'precise financial calculations across six calculation types' and enumerates each type with its specific use, such as 'compound_interest computes the final value and total interest earned'. This clearly differentiates it from sibling data-fetching tools like crypto_price or stock_quote.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description details each calculation type's purpose (e.g., 'loan_repayment calculates the monthly payment... for a mortgage or loan') and gives explicit selection guidance: prefer financial_calculator_lite when only the single headline result is needed, and use stock_quote, crypto_price, or currency_convert instead for live market prices or exchange rates.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
financial_calculator_lite
Performs common financial calculations locally with no external API dependency. Supports compound interest, loan repayment, return on investment (ROI), present value, future value, and break-even analysis. Returns a single numeric result for the requested calculation type. This is a lightweight variant of financial_calculator — it returns only the result number rather than a full structured breakdown (monthly payment, total interest, annualised ROI, etc.). Use financial_calculator_lite when only the headline figure is needed. Prefer financial_calculator when the agent needs a full breakdown, multiple sub-values, or labelled output fields for compound interest earned, total repayable, or annualised returns. Neither this tool nor financial_calculator fetches live market data — for live prices use stock_quote, crypto_price, or currency_convert.
| Name | Required | Description | Default |
|---|---|---|---|
| rate | No | Annual interest or discount rate as a decimal (e.g. 0.05 for 5%). Required for compound_interest, loan_repayment, present_value, and future_value. | |
| periods | No | Number of compounding periods (years) or loan term in months depending on calculator_type. Required for compound_interest, loan_repayment, future_value, and present_value. | |
| principal | No | Starting amount or loan principal in currency units. Required for compound_interest, loan_repayment, present_value, and future_value. | |
| final_value | No | Final investment value. Required for roi calculations. | |
| initial_value | No | Initial investment amount. Required for roi calculations. | |
| calculator_type | Yes | Type of calculation to perform. Accepted values: 'compound_interest', 'loan_repayment', 'roi', 'present_value', 'future_value', 'break_even'. | |
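Note that the two variants use different rate conventions: financial_calculator expects a percentage (5 for 5%), while financial_calculator_lite expects a decimal (0.05). A quick check that the two conventions describe the same rate once normalised:

```python
# The same 5% annual rate, expressed per each tool's documented convention
full_rate = 5       # financial_calculator: percentage
lite_rate = 0.05    # financial_calculator_lite: decimal

# Future value of 1000 over 10 years, computed under each convention
fv_full = 1000 * (1 + full_rate / 100) ** 10
fv_lite = 1000 * (1 + lite_rate) ** 10
assert abs(fv_full - fv_lite) < 1e-9  # identical once the percentage is divided by 100
```

An agent switching between the two tools must apply this division by 100; passing 5 where 0.05 is expected would compound at 500% per year.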
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses local execution and no external API dependency. It notes that the tool does not fetch live market data. No annotations are present, so the description carries the burden; it lacks details on error handling or parameter validation, but is otherwise transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, well-structured, and front-loaded: it states purpose, return value, differentiation from sibling, and usage guidance in a logical flow. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (6 parameters, multiple calculator types, no output schema, no annotations), the description covers purpose, parameter dependencies, and sibling differentiation. It lacks explicit return format details or error handling, but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). The description adds value by associating specific parameters with calculator types (e.g., 'Required for compound_interest, loan_repayment...'), and by clarifying rate format (decimal). This goes beyond the schema's basic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it performs common financial calculations locally, enumerates the supported types (compound interest, loan repayment, etc.), and specifies that it returns a single numeric result. It effectively distinguishes itself from the sibling financial_calculator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises when to use this lite variant versus the full financial_calculator, and also directs the agent to other tools (stock_quote, crypto_price, currency_convert) for live data, providing comprehensive selection guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
open_banking_transactions
Retrieves bank account balances and transaction history via PSD2 Open Banking (TrueLayer), covering 300+ UK and European banks. Returns the account balance, ISO 4217 currency code, and up to 100 recent transactions — each with date, merchant description, amount, and category. Supports optional date filtering to narrow the transaction window. Use this tool when an agent needs to inspect a user's spending history, verify a payment has cleared, assess account affordability, categorise recent bank transactions, or produce a financial summary from live bank data. Do not use for payment initiation — this tool is strictly read-only. Do not use for Stripe-specific payment records, subscription billing, or failed charge investigation — use stripe_payments instead. Requires a TrueLayer access token; returns structured mock data if no token is configured.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of transactions to return. Integer between 1 and 100. Defaults to 10. | |
| from_date | No | Optional start date for filtering transactions. ISO 8601 date format: YYYY-MM-DD. Example: 2026-01-01 to retrieve transactions from 1 January 2026 onwards. | |
| account_id | No | TrueLayer account ID to retrieve transactions for. Obtain from a prior account-listing call. Omit to return data for the first connected account. | |
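The parameter constraints above can be enforced client-side before the call; a hypothetical helper (not part of the server):

```python
from datetime import date

def transaction_args(limit=10, from_date=None, account_id=None):
    """Validate open_banking_transactions arguments per the parameter table."""
    if not 1 <= limit <= 100:
        raise ValueError("limit must be an integer between 1 and 100")
    args = {"limit": limit}
    if from_date is not None:
        date.fromisoformat(from_date)  # raises ValueError unless YYYY-MM-DD
        args["from_date"] = from_date
    if account_id is not None:
        args["account_id"] = account_id  # omit to use the first connected account
    return args
```

For example, `transaction_args(25, "2026-01-01")` narrows the window to transactions from 1 January 2026 onwards.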
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. It discloses read-only behaviour, the TrueLayer token requirement, and the mock-data fallback when no token is configured, but lacks details on rate limits or error handling. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Seven concise sentences with front-loaded purpose and key details. Every sentence adds value; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with all optional parameters and no output schema, the description sufficiently covers purpose, data returned, and use cases. Could be improved by noting output structure, but return fields are mentioned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good descriptions; the tool description adds value by listing the returned fields (balance, currency code, transactions with date, merchant description, amount, and category) and noting the optional date filtering.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves bank account balances and transaction history via PSD2 Open Banking (TrueLayer), specifying bank coverage and returned data fields. It distinguishes itself from sibling tools like bank_accounts and stripe_payments.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists use cases (inspecting spending history, verifying a payment has cleared, assessing affordability, categorising transactions) and states when not to use it: payment initiation is out of scope (the tool is read-only), and Stripe-specific records should go to stripe_payments.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
platform_tool_finder
Discovers the most relevant tools available on this MCP server for a given task using local semantic search (MiniLM-L6-v2 embeddings). Accepts a plain-English description of what needs to be accomplished and returns the best matching tools ranked by relevance, along with their input schemas, pricing tier, and exact call instructions. Use this tool first when you are connected to this server but do not know which specific tool to call — describe your goal and let platform_tool_finder identify the right capability. Do not use this tool if you already know the tool name — call that tool directly instead. Returns up to 10 results ranked by semantic similarity score.
| Name | Required | Description | Default |
|---|---|---|---|
| top_k | No | How many ranked results to return. Integer between 1 and 10. Defaults to 3. Use a higher value when the best tool is ambiguous. | |
| intent | Yes | Plain-English description of what you need to accomplish. Be specific about the goal, not the tool name. Example: 'I need to convert 500 dollars to euros at the current exchange rate' or 'get the latest technology news headlines'. | |
| model_hint | No | Optional: the name of the calling LLM model. Examples: claude, gpt-4, gemini. Used to apply model-specific manifest variant ranking when available. Omit if unknown. | |
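To illustrate the ranking mechanism, a toy version of embedding-based tool discovery (3-dimensional vectors standing in for real MiniLM-L6-v2 embeddings; the tool names and scores here are illustrative only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy embeddings for three tools and one intent
tool_vecs = {
    "currency_convert": [0.9, 0.1, 0.0],
    "crypto_price":     [0.1, 0.9, 0.0],
    "stock_quote":      [0.0, 0.2, 0.9],
}
intent_vec = [0.8, 0.2, 0.1]  # embedding of "convert 500 dollars to euros"

top_k = 3
ranked = sorted(tool_vecs, key=lambda t: cosine(intent_vec, tool_vecs[t]), reverse=True)[:top_k]
```

With these toy vectors, `ranked[0]` is `"currency_convert"`: the intent embedding sits closest to the currency tool, which is the behaviour platform_tool_finder promises at scale.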
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return format (ranked results with schemas, pricing, call instructions), up to 10 results, and the embedding model. Could mention non-destructive nature, but still transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, well-structured, no fluff. The first states the purpose, the second describes input and output, the third and fourth give usage guidance, and the fifth specifies the result count.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains returns adequately. Covers input, behavior, and usage. No missing critical information for task completion.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description adds meaning: top_k default and when to increase, intent should be goal not tool name with examples, model_hint optional for model-specific ranking. Exceeds schema information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it discovers relevant tools using semantic search. It distinguishes from sibling tools by being a meta-tool for finding which specific tool to call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use (when you don't know the tool) and when not to use (if you already know the tool name). Provides examples and advises using higher top_k when ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stock_price_lite
Retrieves the current trading price for a publicly listed stock by ticker symbol. Returns the current price as a single numeric value. This is a lightweight variant of stock_quote — it omits intraday high/low, percentage change, previous close, company name, sector, and exchange metadata. Use stock_price_lite when only the raw current price is needed for a quick lookup or calculation. Prefer stock_quote when the agent also needs price change, intraday range, company information, or a fully structured response suitable for portfolio reporting. Does not support cryptocurrency prices — use crypto_price for full market data (price, volume, market cap) or crypto_price_lite for a lightweight spot price lookup.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Stock ticker symbol in uppercase. Examples: AAPL (Apple), MSFT (Microsoft), TSLA (Tesla), GOOGL (Alphabet), AMZN (Amazon), NVDA (Nvidia). Must be a valid exchange-listed ticker. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Although no annotations are provided, the description clearly states it returns 'the current price as a single numeric value' and explains it is a lightweight variant omitting intraday high/low, percentage change, etc. It does not discuss latency or data source, but for a simple read operation, the behavioral description is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every sentence is purposeful. The description is front-loaded with the primary purpose, then details the lightweight nature, usage contexts, and exclusions. It is concise and well-structured with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description is fully complete. It covers what the tool does, what it omits, when to use it versus alternatives, and what it does not support. No critical information is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for its single parameter 'symbol', with examples and validation details. The description reinforces the parameter's purpose by explaining it is a stock ticker and provides usage context. This adds value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Retrieves the current trading price for a publicly listed stock by ticker symbol.' It uses a specific verb ('Retrieves') and resource ('current trading price'), and distinguishes itself from the sibling tool 'stock_quote' by highlighting it is a lightweight variant that omits extra data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear guidance on when to use this tool ('when only the raw current price is needed') and when to prefer alternatives ('Prefer stock_quote when...'). It also explicitly states that it does not support cryptocurrency prices, directing to 'crypto_price' for digital assets.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stock_quote
Retrieves real-time stock price quotes and company information for any publicly traded company via the Finnhub API. Returns current price, intraday high and low, percentage change from previous close, previous close price, sector, and exchange. Use stock_quote when an agent needs to look up a stock price, check intraday market performance, retrieve company sector data, monitor equity portfolio values, or answer any question about the current trading price of a publicly listed company. Prefer stock_quote over stock_price_lite when the agent needs price change, intraday range, company name, or sector — stock_price_lite returns only the raw current price with no additional context. Do not use for cryptocurrency prices — use crypto_price (CoinGecko, 10,000+ assets) or crypto_price_lite for a lightweight variant. Do not use for fiat currency conversion — use currency_convert or currency_fx_lite. Requires a Finnhub API key to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Stock ticker symbol in uppercase. Examples: AAPL (Apple), MSFT (Microsoft), TSLA (Tesla), GOOGL (Alphabet), AMZN (Amazon), NVDA (Nvidia). Must be a valid exchange-listed ticker symbol. | |
| include_company_info | No | Whether to include company name, sector, and exchange details alongside the price quote. Defaults to true. Set to false for a faster, price-only response. | |
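For orientation, a tools/call request for this tool might look like the following sketch. The JSON-RPC envelope assumes a standard MCP transport; the id and argument values are illustrative, not taken from the server.

```python
import json

# Hypothetical MCP "tools/call" request for stock_quote, built from
# the parameter table above. Argument values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "stock_quote",
        "arguments": {
            "symbol": "NVDA",               # required, uppercase ticker
            "include_company_info": False,  # optional, defaults to true
        },
    },
}
print(json.dumps(request, indent=2))
```

Omitting include_company_info would return the full quote with company name, sector, and exchange, per the default.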
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that the tool returns real-time data and requires a configured API key. While it does not detail error handling or rate limits, it provides essential behavioral context beyond what is structured.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is a well-structured single paragraph of about 110 words. Every sentence provides value: purpose, return data, usage guidance, exclusions, and prerequisites. No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description lists key return fields (price, high/low, change, previous close, sector, exchange). It covers prerequisite (API key) and distinguishes from siblings. For a 2-parameter tool, this is thorough and enables correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers both parameters with descriptions, and the description adds meaning: it explains that include_company_info controls inclusion of sector/exchange, and notes performance benefit when set to false. This goes beyond schema to clarify parameter behavior.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it retrieves real-time stock price quotes and company information, listing specific data points returned (current price, high/low, change, sector, exchange). It uses a specific verb 'Retrieves' and resource, and distinguishes itself from sibling tools like stock_price_lite.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs when to use stock_quote, including contrasts with stock_price_lite for price-only needs, crypto_price for cryptocurrencies, and currency_convert for fiat conversion. Also notes the requirement of a Finnhub API key, providing clear context for agent decision-making.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stripe_payment_records
Retrieves payment and charge records from a Stripe merchant account. Returns a list of payment records filtered by the requested query type. Use stripe_payment_records when an agent needs to review recent charges, refunds, disputes, or subscription payments from a Stripe account. This is a lightweight variant of stripe_payments — it returns a simple records array rather than the full structured Stripe response with customer details, metadata, and pagination cursors. Prefer stripe_payments when the agent needs complete Stripe charge objects including customer IDs, payment method details, metadata fields, and processing status breakdowns. Prefer open_banking_transactions or bank_accounts when the payment data source is a bank account rather than a Stripe merchant account. Requires a Stripe API key to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of records to return. Defaults to 10. Maximum 50. | |
| query_type | Yes | Type of payment records to retrieve. Accepted values: 'recent_payments' (latest charges), 'refunds' (refund records), 'disputes' (disputed charges), 'subscriptions' (recurring subscription payments). | |
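The constraints in this table can be enforced client-side before a call is made. The following is a minimal sketch; the helper name and the clamping behavior are assumptions, not part of the server's API.

```python
# Guard matching the documented constraints of stripe_payment_records:
# query_type must be one of four values, limit defaults to 10 with a
# maximum of 50. Hypothetical helper, for illustration only.
ALLOWED_QUERY_TYPES = {"recent_payments", "refunds", "disputes", "subscriptions"}

def payment_records_args(query_type, limit=10):
    if query_type not in ALLOWED_QUERY_TYPES:
        raise ValueError(f"unsupported query_type: {query_type!r}")
    # Clamp rather than reject an oversized limit.
    return {"query_type": query_type, "limit": min(limit, 50)}
```

For example, payment_records_args("refunds", 80) would clamp limit to 50 before the request reaches the server.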
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must cover behavioral traits. It explains that results are filtered by query_type and that the tool is a lightweight variant of stripe_payments. However, it does not explicitly state that the operation is read-only or mention any rate limits, and beyond 'a simple records array' the fields of each record are left unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, then provides usage guidelines and comparisons with siblings in a concise manner. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema and no annotations, the description provides adequate context: purpose, usage, alternatives, and parameter details. It could be improved by specifying the structure of returned records, but it is sufficient for an agent to correctly select and invoke the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining the accepted values for query_type ('recent_payments', 'refunds', 'disputes', 'subscriptions') and the default and maximum for limit (defaults to 10, max 50).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves payment and charge records from a Stripe merchant account, filtered by query type. It distinguishes itself from sibling stripe_payments as a lightweight variant returning a simple array, and from open_banking_transactions and bank_accounts for bank data sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (review recent charges, refunds, disputes, subscriptions) and provides alternatives: prefer stripe_payments for full objects with customer details, prefer bank tools for bank data. Also notes the requirement for a configured Stripe API key.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stripe_payments
Retrieves live payment data from a Stripe account via the Stripe API. Supports four query types: recent_payments returns the latest payment intents with status, amount, and currency; failed_charges returns declined or failed charges with failure reasons and error codes; customers returns customer records with name, email, and payment method details; subscriptions returns active and cancelled subscription plans with billing interval and status. Use stripe_payments when an agent needs to investigate payment failures, audit recent transaction activity, retrieve customer billing records, or check subscription status within a Stripe account — it returns full Stripe charge objects with customer IDs, metadata, and processing details. Prefer stripe_payment_records for a lighter-weight Stripe query returning only a simple records array without full Stripe object structure. Do not use for bank account transactions or PSD2 Open Banking data — use open_banking_transactions instead. Do not use for generic bank account history — use bank_accounts instead. Requires a Stripe secret key to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of records to return. Integer between 1 and 25. Defaults to 10. | |
| query_type | Yes | The type of Stripe data to retrieve. Must be one of: recent_payments (latest payment intents), failed_charges (declined or failed charges with failure reasons), customers (customer records with contact and payment method info), subscriptions (billing subscriptions with status and plan details). | |
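Note that stripe_payments uses the same argument names as stripe_payment_records but a tighter limit range (1 to 25 rather than a maximum of 50). A sketch of the corresponding validation, with a hypothetical helper name:

```python
# Argument validation for stripe_payments derived from the parameter
# table: four query types, limit an integer between 1 and 25,
# defaulting to 10. Hypothetical helper, for illustration only.
STRIPE_QUERY_TYPES = {"recent_payments", "failed_charges", "customers", "subscriptions"}

def stripe_payments_args(query_type, limit=10):
    if query_type not in STRIPE_QUERY_TYPES:
        raise ValueError(f"unsupported query_type: {query_type!r}")
    # Clamp limit into the documented 1..25 range.
    return {"query_type": query_type, "limit": max(1, min(int(limit), 25))}
```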
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It states that the tool retrieves live payment data and requires a Stripe secret key to be configured, but it does not clarify that the queries are read-only, nor does it mention rate limits or error behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded: the first sentence states the primary purpose, the second enumerates the four query types, and the remainder gives usage guidance and alternatives. It runs longer than its siblings but contains no extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with two well-documented parameters and no output schema, the description covers the four query types, the shape of the returned data (full Stripe charge objects with customer IDs and metadata), and usage guidance. It omits error handling and pagination details, which would improve completeness. It is adequate but not outstanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description adds use-case context but no new parameter-specific details beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves live payment data from Stripe, listing the specific resources returned for each query type (payment intents, failed charges, customer records, subscriptions). It distinguishes itself from siblings: stripe_payment_records as the lightweight variant, and open_banking_transactions and bank_accounts for bank-sourced data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use scenarios (investigate payment failures, audit transaction activity, retrieve customer billing records, check subscription status) and equally explicit alternatives: prefer stripe_payment_records for a lighter-weight query, and use open_banking_transactions or bank_accounts when the data source is a bank rather than Stripe.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
web_news_headlines
Retrieves the latest real-time news headlines and article summaries from BBC News and The Guardian across nine topic categories. Returns structured articles with headline, description, source name, article URL, and publication date — sorted most recent first. No API key required. Use this tool when an agent needs current news about a specific topic, wants to summarise today's headlines, needs to research recent events, monitor a subject area for new developments, or build a news briefing. Do not use this tool to read the full content of a specific article — use web_url_reader instead, passing the article URL returned by this tool. Do not use when news from sources outside BBC News and The Guardian is required.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of articles to return. Integer between 1 and 20. Defaults to 10. Use a lower number for a quick briefing and a higher number for comprehensive coverage. | |
| topic | No | News topic category to retrieve headlines for. Accepted values: 'general' (top stories across all categories), 'technology' (tech industry and digital news), 'business' (markets, economy, and corporate news), 'science' (research, environment, and discoveries), 'health' (medical and public health news), 'politics' (government and political news), 'sport' (sports and athletics), 'ai' (artificial intelligence and machine learning news), 'world' (international and global news). Defaults to 'general' if omitted or unrecognised. | |
| keyword | No | Optional search keyword to filter headlines. Only articles whose title or description contains this keyword will be returned. Case-insensitive. Examples: 'OpenAI', 'interest rates', 'climate change'. Omit to return all headlines for the chosen topic. | |
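The keyword filter semantics described above can be sketched as a small predicate. This is an illustration of the documented behavior, not the server's implementation; the field names follow the returned-article fields listed in the description (headline, description).

```python
# Case-insensitive substring match against an article's headline or
# description; an empty or missing keyword matches everything, mirroring
# the documented behavior of web_news_headlines.
def matches_keyword(article, keyword=None):
    if not keyword:
        return True
    k = keyword.lower()
    return (k in article.get("headline", "").lower()
            or k in article.get("description", "").lower())
```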
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, but description adequately covers data sources, no-auth requirement, sorted output, and returned fields. Lacks mention of potential rate limits or data freshness, but overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is a single coherent paragraph with logical flow: function, output, usage guidance. Efficient though slightly lengthy; each sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description lists all returned fields. Covers input parameters fully, source limitations, and usage boundaries. Complete for a data retrieval tool with these optional params.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with well-described parameters. The description adds little beyond the schema (for example, that no API key is required) and does not deepen parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description specifies the tool retrieves real-time headlines and summaries from BBC News and The Guardian across nine topics, listing returned fields and sorting order. It clearly distinguishes from sibling web_url_reader for full article reading.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (current news, summarising, research, monitoring, briefing) and when not to use (reading full article, needing other sources), with direct reference to web_url_reader as alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
web_url_reader
Fetches any public web page and returns clean, readable plain text stripped of HTML, navigation, scripts, advertisements, and boilerplate. Returns the page title, meta description, word count, and main body text ready for analysis or summarisation. Use this tool when an agent needs to read the content of a specific web page or article URL — for example to summarise an article, extract facts from a page, verify a claim by reading the source, or convert a web page into plain text to pass to another tool. Pass article URLs returned by web_news_headlines to this tool to read full article content. Do not use this tool to discover current news headlines — use web_news_headlines instead. Does not execute JavaScript — best suited for standard HTML content pages. Will not work with paywalled, login-protected, or JavaScript-rendered single-page applications.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The full public URL to fetch and read. Must include the scheme. Examples: 'https://en.wikipedia.org/wiki/Artificial_intelligence', 'https://www.bbc.com/news/technology-12345678'. Only HTTP and HTTPS URLs are supported. | |
| max_chars | No | Maximum number of characters to return from the page body text. Defaults to 8000. Set higher (up to 50000) for long articles or documents. Set lower for quick headline extraction. The response indicates whether content was truncated. | |
| include_links | No | Whether to include a list of hyperlinks found on the page alongside the text content. Defaults to false. Set to true when the agent needs to discover further URLs to follow, such as when crawling a site or finding references. | |
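The url constraints (full URL with scheme, HTTP or HTTPS only) can be checked before calling the tool. A minimal sketch using the standard library; the helper name is hypothetical:

```python
from urllib.parse import urlparse

# Client-side check matching the documented url parameter: a full
# public URL including the scheme, HTTP or HTTPS only.
def valid_reader_url(url):
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

A scheme-less string like 'en.wikipedia.org/wiki/...' fails this check, matching the table's requirement that the scheme be included.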
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, the description fully covers behavioral traits: returns plain text stripped of HTML, navigation, ads; returns title, meta description, word count, main body; does not execute JavaScript; will not work with paywalled or JavaScript-rendered apps; indicates if content truncated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with core functionality first, then use cases, then limitations. While comprehensive, it is not overly verbose; every sentence adds value. Minor redundancy could be trimmed, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains what is returned (title, meta description, word count, main body). It covers limitations (no JavaScript, paywalls) and optional parameters. Adequate for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description adds significant value: explains max_chars default and how to adjust (higher for long articles, lower for headlines), include_links for crawling, and provides URL examples. This goes well beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool fetches a public web page and returns clean readable plain text. It provides specific use cases (summarize article, extract facts, verify claim) and distinguishes itself from the sibling tool web_news_headlines.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly tells when to use (reading content of a specific URL) and when not to use (not for discovering headlines; use web_news_headlines). It also warns about limitations like no JavaScript, paywalled, or login-protected pages.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
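Before deploying the file, a quick local sanity check can catch a mistyped schema URL or a missing maintainer email. This sketch only verifies the two documented fields; anything else Glama validates server-side.

```python
import json

# Hypothetical pre-publish check for /.well-known/glama.json.
def check_glama_json(raw):
    problems = []
    data = json.loads(raw)
    if data.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected $schema")
    maintainers = data.get("maintainers") or []
    if not maintainers or not all(isinstance(m, dict) and "email" in m for m in maintainers):
        problems.append("maintainers must contain at least one object with an email")
    return problems
```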
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!