Inferventis — Financial Data, News & Web MCP
Server Details
Live financial data MCP: FX, crypto, stocks, news, URL reader. x402 on Base: $0.001/call.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Bankee-ai/inferventis-tools
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 19 of 19 tools scored. Lowest: 3.3/5.
Multiple tools have overlapping purposes (e.g., 5 currency conversion tools, 4 crypto/stock price tools, 2 financial calculators, 2 Stripe tools). Although descriptions attempt to differentiate them, an AI agent would likely struggle to select the correct tool, especially with subtle distinctions like data source or response verbosity.
All tools use snake_case naming, but the word order is inconsistent: some are plain noun phrases (e.g., crypto_fx_rates, stock_quote) while others append a verb to the noun (e.g., currency_convert) or lead with a domain prefix (e.g., web_news_headlines). The use of suffixes like 'lite' and 'full' is also not uniform (e.g., 'currency_convert_lite' versus 'currency_fx_lite').
19 tools is slightly above the ideal range of 3-15, but the server covers multiple domains (banking, crypto, stocks, forex, payments, news, web). The count is reasonable given the breadth, though some tools are redundant variants that could be consolidated.
The tool set covers the core financial data needs (banking, crypto, stocks, forex, payments) along with news and web reading. It includes a tool finder for discoverability. Exceptions include lack of historical stock data, limited news sources, and no web search capability, but overall it is well-scoped for its stated purpose.
Available Tools
19 tools

bank_accounts
Retrieves bank account details and recent transaction history via a connected bank API integration. Returns a list of transactions for the specified account, or for all linked accounts when no account ID is provided. Use bank_accounts when an agent needs to inspect account balances, review recent spending, categorise transactions, or reconcile records against a specific bank account. Prefer open_banking_transactions when the integration uses a PSD2 Open Banking provider (TrueLayer) covering 300+ UK and European banks — open_banking_transactions returns richer transaction metadata including merchant names, categories, and running balances. Prefer stripe_payments when the source of payments is a Stripe merchant account rather than a retail bank account. This tool requires a valid bank API credential to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of transactions to return. Defaults to 20. Maximum 100. | |
| account_id | No | Identifier of the specific bank account to query. When omitted, transactions across all linked accounts are returned. | |
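As an illustrative sketch (not taken from the server's own docs), an MCP client would invoke this tool with a JSON-RPC 2.0 `tools/call` request. The argument names come from the table above; the request id and envelope fields are standard MCP plumbing:

```python
import json

# Hypothetical tools/call request for bank_accounts. Argument names
# (limit, account_id) match the schema above; everything else is the
# standard MCP JSON-RPC envelope.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "bank_accounts",
        "arguments": {
            "limit": 50,  # optional; defaults to 20, capped at 100
            # account_id omitted: transactions across all linked accounts
        },
    },
}
print(json.dumps(request, indent=2))
```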
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description discloses that the tool requires a valid bank API credential and uses a connected bank API integration. It does not detail error behavior or rate limits, but the read-only nature is implied by the retrieval purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, well-structured, and contains no extraneous information. It front-loads the primary purpose and efficiently integrates usage guidelines.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two optional parameters and no output schema, the description covers purpose, usage with alternatives, and prerequisites. It could mention the output format (list of transactions) but that is evident from the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining that omitting account_id returns transactions for all linked accounts and noting the default and maximum for limit, which goes beyond the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves bank account details and transaction history, lists specific use cases (inspect balances, review spending, categorize transactions, reconcile records), and explicitly distinguishes itself from sibling tools (open_banking_transactions, stripe_payments).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus when to prefer open_banking_transactions (PSD2 Open Banking) or stripe_payments (Stripe merchant accounts), including specific context about integration types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crypto_fx_rates
Converts an amount between any cryptocurrency and fiat currency pair, or between two cryptocurrencies, using real-time exchange rates sourced from CoinAPI. Supports all crypto-to-fiat (BTC/USD, ETH/EUR), fiat-to-crypto (USD/BTC), and cross-crypto (BTC/ETH) conversions. Use crypto_fx_rates when the conversion involves at least one cryptocurrency and a specific amount must be converted. Prefer crypto_price when only the spot price of a coin in fiat is needed without converting a specific amount, or prefer crypto_price_lite for the same spot price with a minimal response schema. Prefer currency_convert or currency_fx_lite when both currencies are fiat — those tools use ECB/Frankfurter or mid-market rates and do not consume a CoinAPI quota. Requires a CoinAPI key to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target currency code. Accepts ISO 4217 fiat codes (USD, EUR, GBP) or cryptocurrency ticker symbols (BTC, ETH, SOL, XRP). Case-insensitive. | |
| from | Yes | Source currency code. Accepts ISO 4217 fiat codes (USD, EUR, GBP) or cryptocurrency ticker symbols (BTC, ETH, SOL, XRP). Case-insensitive. | |
| amount | Yes | Amount to convert. Must be a positive number. Denominated in the 'from' currency. | |
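Since both currency codes are case-insensitive and the amount must be positive, a client can normalize arguments before calling the tool. This helper is a hypothetical sketch, not part of the server:

```python
def build_fx_arguments(from_code: str, to_code: str, amount: float) -> dict:
    """Normalize arguments for a crypto_fx_rates call (illustrative helper).

    Codes are case-insensitive per the schema; uppercasing them keeps
    logs and caches consistent. The amount must be a positive number
    denominated in the 'from' currency.
    """
    if amount <= 0:
        raise ValueError("amount must be a positive number")
    return {"from": from_code.upper(), "to": to_code.upper(), "amount": amount}

args = build_fx_arguments("btc", "usd", 0.5)
print(args)  # {'from': 'BTC', 'to': 'USD', 'amount': 0.5}
```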
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry behavioral info. It reveals dependency on CoinAPI and requirement for an API key. Implies quota consumption by contrasting with fiat tools. Could mention response format or error handling, but the safety profile (read-only) is clear enough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core function, then lists supported pairs, then closes with usage guidance and alternatives. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema or annotations, the description covers purpose, usage, prerequisites, and alternatives. Lacks details on error conditions or output format but is sufficient for the agent to decide to invoke. Acceptable completeness for the complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3. The description does not add significant parameter-level detail beyond what the schema provides. It briefly mentions allowed code types but no new semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (converts), resource (amount between currency pairs), and source (CoinAPI). It distinguishes from siblings like crypto_price and currency_convert by specifying the scope (at least one cryptocurrency).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when-to-use: when conversion involves crypto and live CoinAPI rates are needed. Offers alternatives: crypto_price for spot prices without conversion amount, currency_convert for fiat-only conversions. Also notes that fiat-only tools do not consume CoinAPI quota.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crypto_price
Retrieves real-time price data for any cryptocurrency listed on CoinGecko. Returns the current price in any fiat currency, 24-hour percentage change, market capitalisation, and 24-hour trading volume. Supports all major cryptocurrencies including Bitcoin (BTC), Ethereum (ETH), Solana (SOL), XRP, Cardano (ADA), Dogecoin (DOGE), Polygon (MATIC), Chainlink (LINK), Avalanche (AVAX), and 10,000+ additional coins. Use crypto_price when an agent needs the full market picture for a digital asset — price, change, market cap, and volume in one call. Prefer crypto_price_lite when only the spot price and 24h change are needed and a smaller response payload is preferred. Use crypto_fx_rates (via CoinAPI) when converting a specific amount between a cryptocurrency and fiat, or between two cryptocurrencies. Do not use this tool for fiat-to-fiat currency conversion (e.g. USD to EUR) — use currency_convert instead. Do not use when historical price data for a specific past date is required — this tool returns live spot prices only.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Cryptocurrency name or ticker symbol. Accepts common symbols (BTC, ETH, SOL, XRP, ADA, DOGE, MATIC, LINK, AVAX) or full names (bitcoin, ethereum, solana). Case-insensitive. | |
| currency | No | Target fiat currency code for the price. Must be a valid ISO 4217 code. Defaults to USD. Examples: usd, eur, gbp, jpy, chf, aud, cad. | |
| include_market_data | No | Whether to include market capitalisation and 24-hour trading volume alongside the price. Defaults to true. Set to false for a lighter response when only the price is needed. | |
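The choice between this tool and its siblings follows directly from the description. A hypothetical routing helper (the function and its flags are invented for illustration) makes the decision rule explicit:

```python
from typing import Optional

def pick_crypto_tool(needs_market_data: bool,
                     convert_amount: Optional[float] = None) -> str:
    """Route between sibling crypto tools per the guidance above."""
    if convert_amount is not None:
        return "crypto_fx_rates"   # converting a specific amount via CoinAPI
    if needs_market_data:
        return "crypto_price"      # price, 24h change, market cap, volume
    return "crypto_price_lite"     # spot price and 24h change only

print(pick_crypto_tool(needs_market_data=True))     # crypto_price
print(pick_crypto_tool(needs_market_data=False))    # crypto_price_lite
print(pick_crypto_tool(False, convert_amount=0.5))  # crypto_fx_rates
```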
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the data source (CoinGecko) and return fields, but does not disclose potential rate limits, API dependencies, or latency considerations. The description is adequate but could be more transparent about operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single well-structured paragraph with clear sections: purpose, return data, examples, and usage scenarios. Every sentence contributes meaning without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters, no output schema, and no annotations, the description covers purpose, parameters, return values, and use cases comprehensively. It lacks only minor details like error handling or explicit read-only indication.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds some value with examples (coin names, currency codes) and clarifies defaults (USD for currency, true for include_market_data). The improvement over the schema is marginal, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the tool's function: retrieving real-time price data for cryptocurrencies from CoinGecko. It lists the exact data returned (price, 24h change, market cap, volume) and provides a comprehensive list of supported coins, distinguishing it from siblings like crypto_price_lite.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance (the full market picture in one call) and names alternatives directly: crypto_price_lite for a smaller payload, crypto_fx_rates for amount conversion, and currency_convert for fiat-to-fiat. It also states when not to use the tool (fiat-to-fiat conversion, historical price data).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crypto_price_lite
Retrieves the current spot price and 24-hour change for any cryptocurrency using the CoinGecko public API. Returns price, percentage change, and a timestamp. This is a lightweight variant of crypto_price that omits extended market data (market cap, volume) — use it when only the raw price and 24h direction are needed. Prefer crypto_price when the agent also needs market capitalisation, trading volume, or richer structured output. Use crypto_fx_rates when converting a specific amount between a cryptocurrency and fiat (e.g. 'convert 0.5 BTC to USD') rather than looking up a spot price. Supports all major coins including BTC, ETH, SOL, XRP, ADA, DOGE, and 10,000+ CoinGecko-listed assets. Accepts ticker symbols (BTC, ETH) or full names (bitcoin, ethereum). Target currency defaults to USD but accepts any ISO 4217 code.
| Name | Required | Description | Default |
|---|---|---|---|
| coin | Yes | Cryptocurrency ticker symbol (BTC, ETH, SOL) or full CoinGecko name (bitcoin, ethereum, solana). Case-insensitive. | |
| currency | No | ISO 4217 fiat currency code for the returned price. Defaults to 'usd'. Examples: usd, eur, gbp, jpy. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses using the CoinGecko public API and the return fields (price, percentage change, timestamp). However, it does not mention rate limits, potential errors, or any authentication requirements, leaving some uncertainty for the agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loading the core function and then providing distinctions from siblings, supported assets, and parameter details. Every sentence adds value with no redundancy or irrelevant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the response includes price, 24-hour change, and timestamp. It covers the essential return structure for a lightweight price tool but could have included an example or note on data format for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described. The description adds context by explaining that 'coin' accepts ticker or full name (case-insensitive) and 'currency' defaults to USD with ISO 4217 codes. While helpful, this largely reinforces schema details rather than adding significant new meaning, warranting a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves current spot price and 24-hour change for cryptocurrencies. It distinguishes itself from the sibling tool 'crypto_price' by noting it omits extended market data and has a simpler response schema, making its specific use case very clear.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool versus 'crypto_price': use this when only raw price and direction are needed, and prefer the sibling for market cap, volume, or richer output. Also lists supported assets and acceptable input formats, providing comprehensive guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_convert
Converts a monetary amount from one currency to another using live exchange rates sourced from the Frankfurter API (European Central Bank data). Returns the converted amount, the exact exchange rate applied, and the timestamp of the rate. Supports 30+ currencies including USD, EUR, GBP, JPY, CHF, AUD, CAD, SEK, NOK, DKK, SGD, HKD, and all major ISO 4217 codes. Use currency_convert when an agent needs to convert prices, invoices, salaries, payments, or any financial figure between fiat currencies in real time with full ECB-backed rate metadata. Prefer currency_convert_lite or currency_fx_lite when only the numeric converted amount and rate are needed without metadata. Use currency_rates when the conversion must use a historical rate from a specific past date. Do not use this tool for cryptocurrency conversion — use crypto_fx_rates (amount conversion via CoinAPI) or crypto_price (spot price lookup via CoinGecko).
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | The target currency code in ISO 4217 format. Must be uppercase. Examples: EUR, JPY, CHF, SGD, HKD, SEK. | |
| from | Yes | The source currency code in ISO 4217 format. Must be uppercase. Examples: USD, GBP, EUR, JPY, CHF, AUD, CAD. | |
| amount | Yes | The monetary amount to convert. Must be a positive number. Decimals are supported. Example: 1500.50 (one thousand five hundred and a half units of the source currency). | |
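Because the tool returns both the converted amount and the exact rate applied, a client can sanity-check the result locally. The rate below is made up for illustration; the tool itself sources live ECB rates via Frankfurter:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Reproduce a conversion locally from the returned rate (values are
# hypothetical). Decimal avoids binary-float rounding on money.
amount = Decimal("1500.50")  # amount in the 'from' currency
rate = Decimal("0.9183")     # exchange rate returned alongside the result
converted = (amount * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(converted)  # 1377.91
```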
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses live exchange rates and return fields, but lacks details on limitations (e.g., unsupported currencies, precision, rate update frequency). This is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently states the core function, return values, supported currencies, and usage context with no redundant information. Front-loaded and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description adequately explains return values (converted amount, rate, timestamp). For a simple conversion tool, this covers what an agent needs to know and is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good descriptions for each parameter. The description only adds that any ISO 4217 code pair is accepted, which reinforces but does not significantly extend the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts monetary amounts between currencies using live ECB-backed rates, accepts ISO 4217 codes, and specifies the return values. This distinguishes it from lighter siblings like currency_convert_lite and currency_fx_lite, which omit rate metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool (converting prices, invoices, and other figures in real time), when to prefer the lite variants or currency_rates for historical dates, and when not to use it (cryptocurrency conversion, with crypto_fx_rates and crypto_price as alternatives).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_convert_lite
Converts a monetary amount between any two fiat currencies using live exchange rates from the Frankfurter API (European Central Bank data). Returns the converted amount and the exchange rate applied. This is a lightweight variant of currency_convert — minimal response without rate timestamp or source attribution. Use currency_convert_lite when only the converted value and rate are needed and ECB/Frankfurter-sourced rates are preferred. Prefer currency_convert when the agent also needs rate timestamp and richer structured output. Prefer currency_fx_lite for the same minimal output (amount + rate) when the ECB data source is not specifically required — both return identical fields but draw from different rate providers. Use currency_rates when a historical rate from a specific past date is required (e.g. accounting, tax, or audit). Use currency_convert_open as a fallback when Frankfurter is unavailable or rate-limited. Does not support cryptocurrency pairs — use crypto_price or crypto_fx_rates for crypto-to-fiat conversions. Accepts all major ISO 4217 currency codes (USD, EUR, GBP, JPY, CHF, AUD, CAD, SGD, NOK, SEK, DKK, PLN, CZK, HUF, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | ISO 4217 code of the target currency (e.g. 'EUR', 'JPY', 'CHF'). Case-insensitive. | |
| from | Yes | ISO 4217 code of the source currency (e.g. 'USD', 'GBP', 'EUR'). Case-insensitive. | |
| amount | Yes | Monetary amount to convert. Must be a positive number. | |
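The description names currency_convert_open as a fallback when Frankfurter is unavailable or rate-limited. A client-side fallback chain might look like this sketch, where `call_tool` stands in for whatever the MCP client uses to invoke a tool by name:

```python
def convert_with_fallback(call_tool, arguments):
    """Try the Frankfurter-backed lite tool first, then the open-rate fallback.

    `call_tool(name, arguments)` is a placeholder for the client's tool
    invocation; the fallback order follows the tool description above.
    """
    last_error = None
    for name in ("currency_convert_lite", "currency_convert_open"):
        try:
            return name, call_tool(name, arguments)
        except RuntimeError as exc:  # e.g. upstream unavailable or rate-limited
            last_error = exc
    raise RuntimeError("all conversion sources failed") from last_error

# Stub client: pretend Frankfurter is rate-limited and the open feed answers.
def fake_call(name, args):
    if name == "currency_convert_lite":
        raise RuntimeError("rate limited")
    return {"converted": 91.8, "rate": 0.918}

used, result = convert_with_fallback(
    fake_call, {"from": "USD", "to": "EUR", "amount": 100}
)
print(used)  # currency_convert_open
```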
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully covers behavioral traits: it names the data source (Frankfurter API/ECB), specifies what is returned (converted amount and rate), and what is omitted (timestamp, metadata). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the core action first, but it is slightly verbose with multiple examples and comparisons. Most sentences add unique value, though some could be condensed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema, the description adequately explains what the tool returns (converted amount and rate) and what it does not (timestamp, metadata). It also covers usage context, alternatives, and exclusions, making it complete for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for all three parameters. The description adds marginal value by listing example currency codes and stating case-insensitivity, but does not significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool converts monetary amounts between fiat currencies using live exchange rates, and distinguishes it from the sibling 'currency_convert' as a lightweight variant. It also specifies it does not support crypto pairs, directing agents to crypto tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly recommends using this variant when only the converted value and rate are needed and a smaller payload is preferred, and directs to 'currency_convert' when richer output is required. It also warns against using for cryptocurrency, providing alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_convert_open
Converts a monetary amount between two fiat currencies using live exchange rates from an open currency exchange API. Returns the converted amount and the rate applied. Use currency_convert_open as an alternative live-rate source when currency_convert (Frankfurter/ECB) or currency_fx_lite are unavailable or rate-limited. The underlying source is an open public exchange rate feed suitable for informational use. Prefer currency_convert or currency_rates when ECB-auditable Frankfurter rates are required for accounting or compliance. Prefer currency_convert_lite for the same minimal output (amount + rate) backed by ECB/Frankfurter rates. Prefer currency_fx_lite for lightweight mid-market conversions. Does not support cryptocurrency pairs — use crypto_fx_rates for any conversion involving a digital asset.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | ISO 4217 target currency code (e.g. 'EUR', 'JPY', 'CHF'). Case-insensitive. | |
| from | Yes | ISO 4217 source currency code (e.g. 'USD', 'GBP', 'EUR'). Case-insensitive. | |
| amount | Yes | Monetary amount to convert. Must be a positive number. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full behavioral burden. It discloses the source (open public feed), return values (converted amount and rate), and constraints (no crypto, informational use). However, it does not explicitly state this is a read-only operation with no side effects, which would be expected given the lack of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with no filler sentences. It front-loads the core purpose, then immediately provides usage guidelines and exclusions. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-param conversion tool with no output schema, the description covers purpose, usage guidelines, behavioral constraints, and exclusions. It is fully adequate for correct agent selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with adequate property descriptions. The description adds context about live rates and the open source, but does not provide additional semantic detail beyond what the schema already offers. Baseline is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('converts', 'returns') and identifies the resource ('monetary amount between two fiat currencies using live exchange rates'). It clearly distinguishes from siblings by naming currency_convert and currency_fx_lite, and specifying different use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool (alternative when others are unavailable/rate-limited) and when to prefer siblings (currency_convert for compliance, currency_fx_lite for lightweight, crypto_fx_rates for crypto). No ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_fx_lite (Grade A)
Converts a foreign exchange (FX) amount between two fiat currencies using live mid-market rates. Returns the converted amount and the exchange rate applied. Use currency_fx_lite when only the numeric result is required and the ECB/Frankfurter data source is not specifically needed. Prefer currency_convert when richer metadata (rate timestamp, ECB-backed Frankfurter source) is needed. Prefer currency_convert_lite for the same minimal output (amount + rate) when ECB/Frankfurter rates are specifically required — both tools return identical fields but draw from different rate providers. Use currency_rates when a historical rate from a specific past date is required. Use currency_convert_open as an alternative open-rate source. Does not support cryptocurrency pairs — use crypto_fx_rates for crypto-to-fiat or crypto-to-crypto conversions, or crypto_price_lite for a spot price lookup. Accepts all major ISO 4217 currency codes.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | ISO 4217 target currency code (e.g. 'EUR', 'CHF', 'AUD', 'CAD'). Case-insensitive. | |
| from | Yes | ISO 4217 source currency code (e.g. 'USD', 'GBP', 'EUR', 'JPY'). Case-insensitive. | |
| amount | Yes | Monetary amount to convert. Must be a positive number. | |
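To make the invocation concrete, here is a minimal sketch of the JSON-RPC envelope an MCP client might send to call this tool. The envelope shape follows the standard MCP `tools/call` convention; the argument values are illustrative and not taken from the server's documentation.

```python
import json

# Hypothetical MCP tools/call request for currency_fx_lite.
# The envelope follows the MCP JSON-RPC convention; argument
# values are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "currency_fx_lite",
        "arguments": {
            "from": "usd",    # case-insensitive per the schema
            "to": "EUR",
            "amount": 250.0,  # must be a positive number
        },
    },
}

payload = json.dumps(request)
```

The lowercase `"usd"` is deliberate: the schema states currency codes are case-insensitive, so a client need not normalize them before sending.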
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses use of live mid-market rates, the returned fields (converted amount and rate), and the crypto exclusion. Lacks details on rate freshness or limits, but is adequate in the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose, then usage guidelines and exclusions. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Describes output and constraints, but lacks explicit output schema structure. However, the description is adequate for a lightweight conversion tool with simple return.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers all parameters with descriptions. Description adds that currency codes are ISO 4217 and case-insensitive, but schema already conveys meaning with 100% coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool converts FX amounts between fiat currencies using live mid-market rates. Distinguishes from siblings by naming alternatives for richer metadata or crypto.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use this vs currency_convert, currency_convert_lite, and crypto_fx_rates. Provides clear context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
currency_rates (Grade A)
Retrieves live and historical fiat currency exchange rates from the Frankfurter API (sourced from the European Central Bank). Supports both real-time conversion and historical rate lookup for any past date, making it the preferred tool when auditable, ECB-sourced rate data is required. Use currency_rates when a rate from a specific past date is required (e.g. accounting, tax, or audit), or when the ECB source must be documented. Prefer currency_convert when only a live conversion is needed with a richer structured response. Prefer currency_convert_lite for lightweight live ECB conversions without historical requirements. Prefer currency_fx_lite when ECB sourcing is not required and only a lightweight live result is needed. Use currency_convert_open when a non-ECB rate source is acceptable and Frankfurter is unavailable. Does not support cryptocurrency pairs — use crypto_fx_rates for any conversion involving a digital asset.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | ISO 4217 target currency code (e.g. 'EUR', 'JPY', 'CHF'). Case-insensitive. | |
| from | Yes | ISO 4217 source currency code (e.g. 'USD', 'GBP', 'EUR'). Case-insensitive. | |
| amount | Yes | Monetary amount to convert. Must be a positive number. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description discloses the data source and capabilities (live and historical). As a read-only tool, it is adequately transparent. It does not mention rate limits or authentication, but that is not required for a read-only tool's transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single paragraph of about 100 words, front-loaded with the main purpose, that efficiently includes sibling differentiation. However, the misleading claim of historical-date support detracts from an otherwise tight description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema present, and description does not describe return structure. Missing date parameter for historical lookup is a significant gap. Completeness is adequate for a simple read tool but not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of actual parameters with descriptions. However, description claims 'historical rate lookup for any past date' but schema has no date parameter, creating misleading expectation. This contradiction reduces clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Retrieves live and historical fiat currency exchange rates' with specific source (ECB via Frankfurter API). Distinguishes from siblings by naming currency_convert, currency_convert_lite, currency_fx_lite, and crypto_fx_rates for different use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (auditable, historical, ECB source) and when to prefer alternatives (live rich response, lightweight, crypto). Also states 'Does not support cryptocurrency pairs' with alternative tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
financial_calculator (Grade B)
Performs precise financial calculations across six calculation types entirely locally with no external API dependency. compound_interest computes the final value and total interest earned on a principal over time at a given annual rate. loan_repayment calculates the monthly payment, total repayable amount, and total interest for a mortgage or loan given the principal, annual rate, and term in months. roi returns return on investment as a percentage and absolute profit or loss, with optional annualised ROI when a holding period is provided. present_value discounts a future cash amount back to its current value using a discount rate. future_value projects a present amount forward at a compounding annual rate. break_even finds the unit volume and revenue at which fixed and variable costs are fully covered by sales. Use this tool when an agent needs to perform any structured financial calculation — loan affordability, investment return, discounted cash flow, or cost analysis. Prefer financial_calculator_lite when only the single headline result is needed rather than a full structured breakdown. Do not use this tool to fetch live market prices or exchange rates — use stock_quote for stock prices, crypto_price for cryptocurrency prices, or currency_convert for FX rates.
| Name | Required | Description | Default |
|---|---|---|---|
| rate | No | Annual interest or discount rate expressed as a percentage. Example: 5 for 5% per year. Used for compound_interest, loan_repayment, present_value, and future_value. | |
| periods | No | Number of time periods. For compound_interest and future_value: years. For loan_repayment: months. For roi: years used for annualised ROI. | |
| principal | No | Starting amount or loan amount in your chosen currency. Used for compound_interest, loan_repayment, and future_value calculations. | |
| final_value | No | Ending value of an investment or asset. Used for roi and present_value calculations. | |
| fixed_costs | No | Total fixed costs of the business or project that do not vary with output volume. Used for break_even calculations. | |
| discount_rate | No | Annual discount rate as a percentage for present value calculations. Represents the required rate of return or cost of capital. | |
| future_amount | No | The target future cash amount to discount back to present value. Used for present_value calculations. | |
| initial_value | No | Starting value of an investment or asset. Used for roi calculations. | |
| price_per_unit | No | The selling price per unit of product or service. Used for break_even calculations. | |
| calculator_type | Yes | The type of financial calculation to perform. Must be one of: compound_interest, loan_repayment, roi, present_value, future_value, break_even. | |
| variable_cost_per_unit | No | The variable cost incurred for each unit produced or sold. Used for break_even calculations. | |
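The six calculation types map to standard finance formulas. The sketch below implements them the textbook way, assuming the rate is an annual percentage (5 for 5%) as the schema states; the server's exact rounding and output field names are not documented here, so treat this as an illustration of the arithmetic, not the server's implementation.

```python
def compound_interest(principal, rate, periods):
    """Final value and total interest earned; rate is an annual percentage."""
    final = principal * (1 + rate / 100) ** periods
    return final, final - principal

def future_value(principal, rate, periods):
    """Project a present amount forward; same growth formula as above."""
    return principal * (1 + rate / 100) ** periods

def loan_repayment(principal, rate, periods):
    """Monthly payment; rate is an annual percentage, periods is months."""
    monthly = rate / 100 / 12
    if monthly == 0:
        return principal / periods
    return principal * monthly / (1 - (1 + monthly) ** -periods)

def roi(initial_value, final_value):
    """Return on investment as a percentage, plus absolute profit or loss."""
    profit = final_value - initial_value
    return profit / initial_value * 100, profit

def present_value(future_amount, discount_rate, periods):
    """Discount a future amount back to today; discount_rate is an annual %."""
    return future_amount / (1 + discount_rate / 100) ** periods

def break_even(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Unit volume and revenue at which all costs are covered."""
    units = fixed_costs / (price_per_unit - variable_cost_per_unit)
    return units, units * price_per_unit
```

For example, a 30-year mortgage of 100,000 at 6% works out to roughly 599.55 per month under this formula, which is the kind of headline figure the description promises for loan_repayment.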
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must disclose behavioral traits. It states calculations are 'local with no external API dependency' but fails to mention auth needs, rate limits, or potential side effects. As a calculator, safety is assumed, but transparency is thin.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured, starting with the general purpose, then detailing each calculation type in a clear, front-loaded paragraph. Every sentence provides unique information without repetition or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema and annotations, the description should fully explain return values and constraints. It names the outputs for each calculation type (e.g., 'final value and total interest') but does not specify the exact JSON keys or structure, which may confuse an agent parsing the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage with detailed parameter meanings (e.g., 'rate: Annual interest or discount rate...'). The tool description adds no additional parameter insight beyond listing which calculation uses which parameters, which is already in the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs six specific financial calculations, listing each with a precise verb and resource (e.g., 'compound_interest computes the final value and total interest'). It distinguishes itself from the sibling financial_calculator_lite by directing agents to the lite variant when only a single headline result is needed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is present: prefer financial_calculator_lite when only a single headline result is needed, and do not use this tool for live market prices or exchange rates, with stock_quote, crypto_price, and currency_convert named as alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
financial_calculator_lite (Grade A)
Performs common financial calculations locally with no external API dependency. Supports compound interest, loan repayment, return on investment (ROI), present value, future value, and break-even analysis. Returns a single numeric result for the requested calculation type. This is a lightweight variant of financial_calculator — it returns only the result number rather than a full structured breakdown (monthly payment, total interest, annualised ROI, etc.). Use financial_calculator_lite when only the headline figure is needed. Prefer financial_calculator when the agent needs a full breakdown, multiple sub-values, or labelled output fields for compound interest earned, total repayable, or annualised returns. Neither this tool nor financial_calculator fetches live market data — for live prices use stock_quote, crypto_price, or currency_convert.
| Name | Required | Description | Default |
|---|---|---|---|
| rate | No | Annual interest or discount rate as a decimal (e.g. 0.05 for 5%). Required for compound_interest, loan_repayment, present_value, and future_value. | |
| periods | No | Number of compounding periods (years) or loan term in months depending on calculator_type. Required for compound_interest, loan_repayment, future_value, and present_value. | |
| principal | No | Starting amount or loan principal in currency units. Required for compound_interest, loan_repayment, present_value, and future_value. | |
| final_value | No | Final investment value. Required for roi calculations. | |
| initial_value | No | Initial investment amount. Required for roi calculations. | |
| calculator_type | Yes | Type of calculation to perform. Accepted values: 'compound_interest', 'loan_repayment', 'roi', 'present_value', 'future_value', 'break_even'. | |
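Note one pitfall the schemas reveal: this lite variant takes the rate as a decimal (0.05 for 5%), while financial_calculator takes a percentage (5 for 5%). A small, purely illustrative helper shows how an agent switching between the two tools might normalize the rate; the helper name is hypothetical, not part of the server.

```python
def to_lite_rate(percent_rate):
    """Convert financial_calculator's percentage form (5 for 5%)
    to financial_calculator_lite's decimal form (0.05).
    Hypothetical client-side helper, not a server function."""
    return percent_rate / 100.0

# Arguments for a financial_calculator_lite call, per its schema.
args = {
    "calculator_type": "compound_interest",
    "principal": 1000,
    "rate": to_lite_rate(5),  # 0.05, the decimal form the lite tool expects
    "periods": 10,
}
```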
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses local computation, no external API dependency, and return of a single number. It could be more explicit about non-destructive nature, but the calculator context implies safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient paragraph. It is front-loaded with the core purpose and immediately distinguishes from siblings. Slightly verbose with listing all calculation types, but overall concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters, no output schema, and no annotations, the description covers purpose, parameter usage by calculation type, sibling differentiation, and external tool references. It does not explain the output format beyond 'single numeric result', but that is sufficient for a lightweight calculator.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3. The description adds value by specifying which parameters are required for each calculator_type, e.g., 'rate required for compound_interest, loan_repayment...' This goes beyond the schema's per-parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs common financial calculations locally, lists specific calculation types (compound interest, loan repayment, etc.), and distinguishes itself from the sibling financial_calculator by noting it returns only a single numeric result instead of a full breakdown.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use this tool (headline figure only) versus financial_calculator (full breakdown). Also advises using stock_quote, crypto_price, or currency_convert for live market data, providing clear alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
open_banking_transactions (Grade A)
Retrieves bank account balances and transaction history via PSD2 Open Banking (TrueLayer), covering 300+ UK and European banks. Returns the account balance, ISO 4217 currency code, and up to 100 recent transactions — each with date, merchant description, amount, and category. Supports optional date filtering to narrow the transaction window. Use this tool when an agent needs to inspect a user's spending history, verify a payment has cleared, assess account affordability, categorise recent bank transactions, or produce a financial summary from live bank data. Do not use for payment initiation — this tool is strictly read-only. Do not use for Stripe-specific payment records, subscription billing, or failed charge investigation — use stripe_payments instead. Requires a TrueLayer access token; returns structured mock data if no token is configured.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of transactions to return. Integer between 1 and 100. | 10 |
| from_date | No | Optional start date for filtering transactions. ISO 8601 date format: YYYY-MM-DD. Example: 2026-01-01 to retrieve transactions from 1 January 2026 onwards. | |
| account_id | No | TrueLayer account ID to retrieve transactions for. Obtain from a prior account-listing call. Omit to return data for the first connected account. | |
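Because the schema documents hard constraints (limit between 1 and 100, ISO 8601 dates), a client can validate arguments before spending a paid call. The sketch below is an illustrative client-side check, not server code; the function name is hypothetical.

```python
from datetime import date

def validate_args(limit=10, from_date=None, account_id=None):
    """Client-side check of open_banking_transactions arguments
    against the documented constraints (illustrative, not server code)."""
    if not (1 <= limit <= 100):
        raise ValueError("limit must be an integer between 1 and 100")
    if from_date is not None:
        date.fromisoformat(from_date)  # raises ValueError unless YYYY-MM-DD
    args = {"limit": limit}
    if from_date:
        args["from_date"] = from_date
    if account_id:
        args["account_id"] = account_id
    return args
```

Omitting `account_id` mirrors the documented default of returning the first connected account.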
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the real-time nature, PSD2 compliance, read-only behavior, the TrueLayer access token requirement, and the mock-data fallback when no token is configured. No annotations exist, so the description carries the full burden. It does not mention rate limits, but the key behavioral traits are covered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with action: the opening sentence states purpose and scope, the next describes the returned fields, and the remainder covers use cases, exclusions, and the token fallback. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description explains return includes categorized transactions, amounts, balance. Parameter descriptions are complete. Lack of error handling or pagination info is minor for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds value: default and max for limit, ISO format for from_date, and behavior when account_id omitted (returns first available account). This goes beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves real-time bank account transaction history and balance data via Open Banking APIs. Specifies the resource (bank transactions) and action (retrieves) with scope (300+ UK/European banks), and explicitly distinguishes itself from stripe_payments for Stripe-specific records; the remaining siblings (stock_quote, crypto_price) sit in clearly separate domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists specific use cases: analyze spending patterns, verify payments have cleared, assess affordability, categorise transactions, and produce financial summaries. It also states when not to use the tool (payment initiation, Stripe-specific records) and names stripe_payments as the alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
platform_tool_finder (Grade A)
Discovers the most relevant tools available on this MCP server for a given task using local semantic search (MiniLM-L6-v2 embeddings). Accepts a plain-English description of what needs to be accomplished and returns the best matching tools ranked by relevance, along with their input schemas, pricing tier, and exact call instructions. Use this tool first when you are connected to this server but do not know which specific tool to call — describe your goal and let platform_tool_finder identify the right capability. Do not use this tool if you already know the tool name — call that tool directly instead. Returns up to 10 results ranked by semantic similarity score.
| Name | Required | Description | Default |
|---|---|---|---|
| top_k | No | How many ranked results to return. Integer between 1 and 10. Use a higher value when the best tool is ambiguous. | 3 |
| intent | Yes | Plain-English description of what you need to accomplish. Be specific about the goal, not the tool name. Example: 'I need to convert 500 dollars to euros at the current exchange rate' or 'get the latest technology news headlines'. | |
| model_hint | No | Optional: the name of the calling LLM model. Examples: claude, gpt-4, gemini. Used to apply model-specific manifest variant ranking when available. Omit if unknown. | |
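The ranking mechanism the description names (embedding similarity) can be sketched with toy vectors in place of MiniLM-L6-v2 output. This is a simplified model of the approach, not the server's code: it scores each tool's embedding against the intent embedding by cosine similarity and returns the top matches.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_tools(intent_vec, tool_vecs, top_k=3):
    """Toy version of embedding-based tool discovery: score every
    tool vector against the intent vector, return the top_k matches."""
    scored = [(name, cosine(intent_vec, vec)) for name, vec in tool_vecs.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Hand-made 2-d "embeddings" standing in for real MiniLM vectors.
tools = {"currency_convert": [0.9, 0.1], "stock_quote": [0.1, 0.9]}
best = rank_tools([1.0, 0.0], tools, top_k=1)
```

An intent vector pointing along the first axis ranks currency_convert first, which is the behavior the real tool promises at semantic scale.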
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It explains the behavior: local semantic search with MiniLM-L6-v2, returning up to 10 results ranked by relevance, along with schemas, pricing, and call instructions. It does not mention rate limits or latency, but the core behavior is well-described. A minor gap is the lack of detail on what 'exact call instructions' entails.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately long but every sentence serves a purpose. It starts with the core function, then usage advice, and ends with output details. It is well-structured and front-loaded. Slightly verbose in the middle ('Returns up to 10 results...') but not overly so; still earns a 4 for efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a meta-discovery tool) and the absence of an output schema, the description is comprehensive. It explains inputs, how it works, and what the output contains (tools, schemas, pricing, instructions). It does not detail the output format (e.g., is it JSON or text?), but the mention of 'ranked by relevance' and 'exact call instructions' provides sufficient context for an agent to understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all three parameters described). The description adds significant value: for 'intent' it provides concrete examples, for 'top_k' it explains when to use higher values, and for 'model_hint' it clarifies an optional role. This goes beyond the schema's basic descriptions, enhancing semantic understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: discovering relevant tools via semantic search on this MCP server. It specifies the embedding model (MiniLM-L6-v2), input (plain-English description), and output (ranked tools with schemas). It distinguishes itself from sibling tools by advising use when the agent does not know which tool to call, making the purpose unmistakable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this tool first when you are connected to this server but do not know which specific tool to call' and 'Do not use this tool if you already know the tool name — call that tool directly instead.' It also advises on adjusting top_k for ambiguous cases, offering clear when-to-use and when-not-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stock_price_lite (Grade A)
Retrieves the current trading price for a publicly listed stock by ticker symbol. Returns the current price as a single numeric value. This is a lightweight variant of stock_quote — it omits intraday high/low, percentage change, previous close, company name, sector, and exchange metadata. Use stock_price_lite when only the raw current price is needed for a quick lookup or calculation. Prefer stock_quote when the agent also needs price change, intraday range, company information, or a fully structured response suitable for portfolio reporting. Does not support cryptocurrency prices — use crypto_price for full market data (price, volume, market cap) or crypto_price_lite for a lightweight spot price lookup.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Stock ticker symbol in uppercase. Examples: AAPL (Apple), MSFT (Microsoft), TSLA (Tesla), GOOGL (Alphabet), AMZN (Amazon), NVDA (Nvidia). Must be a valid exchange-listed ticker. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, but description details the limited output (single numeric value) and what is omitted. Does not specify if data is real-time or delayed, but overall transparent about its lightweight nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured: purpose first, then usage differentiation, then crypto exclusion. Could be slightly shorter, but no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (one param, simple output), description is adequate. Explains return value and usage boundaries (no crypto). No output schema, so description suffices.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one parameter (symbol) with schema coverage 100%. Description adds value by providing examples, uppercase requirement, and valid ticker examples, though schema already clearly defines it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves current trading price for a stock by ticker symbol. Distinguishes itself from sibling tools stock_quote and crypto_price, and explicitly lists what it omits compared to stock_quote.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs when to use this tool (only need raw price) versus stock_quote (need more data) and crypto_price (for crypto). Provides clear alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stock_quote (Grade A)
Retrieves real-time stock price quotes and company information for any publicly traded company via the Finnhub API. Returns current price, intraday high and low, percentage change from previous close, previous close price, sector, and exchange. Use stock_quote when an agent needs to look up a stock price, check intraday market performance, retrieve company sector data, monitor equity portfolio values, or answer any question about the current trading price of a publicly listed company. Prefer stock_quote over stock_price_lite when the agent needs price change, intraday range, company name, or sector — stock_price_lite returns only the raw current price with no additional context. Do not use for cryptocurrency prices — use crypto_price (CoinGecko, 10,000+ assets) or crypto_price_lite for a lightweight variant. Do not use for fiat currency conversion — use currency_convert or currency_fx_lite. Requires a Finnhub API key to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Stock ticker symbol in uppercase. Examples: AAPL (Apple), MSFT (Microsoft), TSLA (Tesla), GOOGL (Alphabet), AMZN (Amazon), NVDA (Nvidia). Must be a valid exchange-listed ticker symbol. | |
| include_company_info | No | Whether to include company name, sector, and exchange details alongside the price quote. Defaults to true. Set to false for a faster, price-only response. | |
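Under the standard MCP `tools/call` request shape, a call to stock_quote might look like the sketch below. The JSON-RPC framing is an assumption based on the MCP specification; only the tool name and argument names come from the table above.

```python
import json

# Hypothetical JSON-RPC payload for invoking stock_quote over Streamable HTTP.
# The tools/call framing follows the standard MCP shape; the tool name and
# argument names are taken from the parameter table above.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "stock_quote",
        "arguments": {
            "symbol": "AAPL",              # required: uppercase ticker
            "include_company_info": True,  # optional: defaults to true
        },
    },
}

body = json.dumps(payload)
```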
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so description bears full burden. It mentions real-time nature, intraday data, and dependency on a configured Finnhub API key. However, it does not address rate limits, data freshness guarantees, or error handling for invalid symbols, which are relevant behavioral aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured: first sentence states core purpose and return values, following sentences give usage guidance and exclusions. Each sentence earns its place; no redundant info. Appropriate length for a tool with moderate complexity and multiple sibling tools.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description lists all major return fields (current price, intraday high/low, percent change, previous close, sector, exchange). Covers input details, usage context, and exclusions. Adequately complete for a stock quote tool with a straightforward purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
100% schema coverage, but description adds value beyond schema: provides stock symbol examples, explains default for include_company_info (true), and gives a performance hint (set to false for faster response). This goes beyond the schema's minimal descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description starts with a clear verb ('Retrieves real-time stock price quotes and company information'), specifies the resource (publicly traded companies via Finnhub API), and lists return fields. It distinguishes from sibling tool stock_price_lite and other domain tools for crypto and currency.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('when an agent needs to look up a stock price...') and when not to use (cryptocurrency, fiat conversion), providing alternative tool names. Also distinguishes from stock_price_lite based on needs for additional data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stripe_payment_records
Retrieves payment and charge records from a Stripe merchant account. Returns a list of payment records filtered by the requested query type. Use stripe_payment_records when an agent needs to review recent charges, refunds, disputes, or subscription payments from a Stripe account. This is a lightweight variant of stripe_payments — it returns a simple records array rather than the full structured Stripe response with customer details, metadata, and pagination cursors. Prefer stripe_payments when the agent needs complete Stripe charge objects including customer IDs, payment method details, metadata fields, and processing status breakdowns. Prefer open_banking_transactions or bank_accounts when the payment data source is a bank account rather than a Stripe merchant account. Requires a Stripe API key to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of records to return. Defaults to 10. Maximum 50. | |
| query_type | Yes | Type of payment records to retrieve. Accepted values: 'recent_payments' (latest charges), 'refunds' (refund records), 'disputes' (disputed charges), 'subscriptions' (recurring subscription payments). | |
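The constraints in the table above (the query_type enum and the limit bound of 50) can be enforced client-side before the call goes out. The helper name below is illustrative, not part of the server.

```python
# Client-side sketch validating stripe_payment_records arguments against the
# documented constraints: query_type must be one of four values, limit 1-50.
ALLOWED_QUERY_TYPES = {"recent_payments", "refunds", "disputes", "subscriptions"}

def build_records_args(query_type: str, limit: int = 10) -> dict:
    """Assemble tools/call arguments, rejecting out-of-range values early."""
    if query_type not in ALLOWED_QUERY_TYPES:
        raise ValueError(f"unknown query_type: {query_type!r}")
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50")
    return {"query_type": query_type, "limit": limit}

args = build_records_args("refunds", limit=25)
```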
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States return type (simple records array vs full structured response) and API key requirement. Lacks details on errors, exact response format, or mutability, but basic behavioral context is present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose; each sentence adds value. Slightly verbose in usage guidance section, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains return type contrast with stripe_payments. Adequate for a simple tool with 2 parameters, though record structure is not detailed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with clear descriptions. Description only reiterates 'filtered by query type' without adding new meaning. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb+resource: 'Retrieves payment and charge records'. Distinguishes from sibling stripe_payments (lightweight variant) and bank account tools (open_banking_transactions, bank_accounts).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit when to use (reviewing charges, refunds, disputes, subscriptions) and when not to (prefer stripe_payments for full objects, prefer bank account tools for bank data). Mentions Stripe API key prerequisite.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stripe_payments
Retrieves live payment data from a Stripe account via the Stripe API. Supports four query types: recent_payments returns the latest payment intents with status, amount, and currency; failed_charges returns declined or failed charges with failure reasons and error codes; customers returns customer records with name, email, and payment method details; subscriptions returns active and cancelled subscription plans with billing interval and status. Use stripe_payments when an agent needs to investigate payment failures, audit recent transaction activity, retrieve customer billing records, or check subscription status within a Stripe account — it returns full Stripe charge objects with customer IDs, metadata, and processing details. Prefer stripe_payment_records for a lighter-weight Stripe query returning only a simple records array without full Stripe object structure. Do not use for bank account transactions or PSD2 Open Banking data — use open_banking_transactions instead. Do not use for generic bank account history — use bank_accounts instead. Requires a Stripe secret key to be configured on the server.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of records to return. Integer between 1 and 25. Defaults to 10. | |
| query_type | Yes | The type of Stripe data to retrieve. Must be one of: recent_payments (latest payment intents), failed_charges (declined or failed charges with failure reasons), customers (customer records with contact and payment method info), subscriptions (billing subscriptions with status and plan details). | |
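Note that stripe_payments caps limit at 25 while stripe_payment_records allows 50. A client can normalize this before the call; clamping (rather than erroring) on limit is a client-side design choice here, not documented server behavior.

```python
# Sketch for stripe_payments: the four query_type values and the 1-25 limit
# bound come from the parameter table above. Out-of-range limits are clamped.
STRIPE_QUERY_TYPES = ("recent_payments", "failed_charges", "customers", "subscriptions")

def stripe_payments_args(query_type: str, limit: int = 10) -> dict:
    if query_type not in STRIPE_QUERY_TYPES:
        raise ValueError(f"query_type must be one of {STRIPE_QUERY_TYPES}")
    return {"query_type": query_type, "limit": max(1, min(limit, 25))}

args = stripe_payments_args("failed_charges", limit=100)  # limit clamped to 25
```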
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full burden. It indicates read-only behavior ('retrieves', 'returns') and discloses the Stripe secret key requirement, but it does not address rate limits, pagination, or other potential side effects. The description is adequate but lacks depth on operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is long but front-loaded: the first sentence states the main purpose, and each subsequent sentence adds query-type detail or usage guidance without redundancy. The structure is efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 2 parameters, no output schema, and no annotations. The description covers main use cases and data types (failure reasons, customer details, billing info). It could hint at response format, but the listed scenarios provide sufficient context for an agent to decide invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with clear descriptions for limit and query_type. The description summarizes query_type options but adds no additional syntax or format details beyond the schema. Baseline 3 is appropriate; the description provides marginal extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves live Stripe payment data, enumerating specific data types (payment intents, failed charges, customers, subscriptions) and use cases. It uses specific verbs like 'retrieves', 'investigate', 'audit', distinguishing it from sibling tools like open_banking_transactions or bank_accounts, which cover other domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly lists when to use the tool ('investigate payment failures... audit recent transaction activity') and when not to, naming alternatives: stripe_payment_records for a lighter-weight response, and open_banking_transactions or bank_accounts for bank data. The scenarios are specific enough to guide selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
web_news_headlines
Retrieves the latest real-time news headlines and article summaries from BBC News and The Guardian across nine topic categories. Returns structured articles with headline, description, source name, article URL, and publication date — sorted most recent first. No API key required. Use this tool when an agent needs current news about a specific topic, wants to summarise today's headlines, needs to research recent events, monitor a subject area for new developments, or build a news briefing. Do not use this tool to read the full content of a specific article — use web_url_reader instead, passing the article URL returned by this tool. Do not use when news from sources outside BBC News and The Guardian is required.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of articles to return. Integer between 1 and 20. Defaults to 10. Use a lower number for a quick briefing and a higher number for comprehensive coverage. | |
| topic | No | News topic category to retrieve headlines for. Accepted values: 'general' (top stories across all categories), 'technology' (tech industry and digital news), 'business' (markets, economy, and corporate news), 'science' (research, environment, and discoveries), 'health' (medical and public health news), 'politics' (government and political news), 'sport' (sports and athletics), 'ai' (artificial intelligence and machine learning news), 'world' (international and global news). Defaults to 'general' if omitted or unrecognised. | |
| keyword | No | Optional search keyword to filter headlines. Only articles whose title or description contains this keyword will be returned. Case-insensitive. Examples: 'OpenAI', 'interest rates', 'climate change'. Omit to return all headlines for the chosen topic. | |
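The keyword filter contract described above (case-insensitive substring match on title or description) can be mirrored client-side. The server's actual implementation is not published; this sketch just reproduces the documented rule against sample data.

```python
# Sample articles and a filter matching the documented keyword behavior:
# case-insensitive substring match on headline or description.
sample_articles = [
    {"headline": "OpenAI ships a new model", "description": "AI industry news"},
    {"headline": "Markets steady", "description": "Investors await interest rates"},
]

def keyword_match(article: dict, keyword: str) -> bool:
    kw = keyword.lower()
    return kw in article["headline"].lower() or kw in article["description"].lower()

hits = [a for a in sample_articles if keyword_match(a, "openai")]
```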
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses sources, sorting order, and that no API key is required. Could mention rate limits or error handling (e.g., if sources are unavailable) but overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise with no wasted words. Front-loaded with purpose, followed by usage guidance, then exclusions. Each sentence contributes meaningfully.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description fully explains return structure. Covers all 3 optional parameters with clear defaults. For a simple news retrieval tool with no required params and no nested objects, the description is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed parameter descriptions. Description adds value beyond schema by explaining sorting ('most recent first') and default behavior for omitted parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb (retrieves), resource (news headlines from BBC and The Guardian), and scope (across nine topic categories, sorted recent first). It distinguishes from sibling web_url_reader by specifying that this tool does not read full articles.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (current news, summarize, research, monitor, briefing) and when not to use (full article content, other sources). Names alternative tool web_url_reader and explains how to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
web_url_reader
Fetches any public web page and returns clean, readable plain text stripped of HTML, navigation, scripts, advertisements, and boilerplate. Returns the page title, meta description, word count, and main body text ready for analysis or summarisation. Use this tool when an agent needs to read the content of a specific web page or article URL — for example to summarise an article, extract facts from a page, verify a claim by reading the source, or convert a web page into plain text to pass to another tool. Pass article URLs returned by web_news_headlines to this tool to read full article content. Do not use this tool to discover current news headlines — use web_news_headlines instead. Does not execute JavaScript — best suited for standard HTML content pages. Will not work with paywalled, login-protected, or JavaScript-rendered single-page applications.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The full public URL to fetch and read. Must include the scheme. Examples: 'https://en.wikipedia.org/wiki/Artificial_intelligence', 'https://www.bbc.com/news/technology-12345678'. Only HTTP and HTTPS URLs are supported. | |
| max_chars | No | Maximum number of characters to return from the page body text. Defaults to 8000. Set higher (up to 50000) for long articles or documents. Set lower for quick headline extraction. The response indicates whether content was truncated. | |
| include_links | No | Whether to include a list of hyperlinks found on the page alongside the text content. Defaults to false. Set to true when the agent needs to discover further URLs to follow, such as when crawling a site or finding references. | |
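The max_chars contract described above (body text cut at the limit, with the response flagging whether truncation occurred) is simple enough to sketch. The field names here are illustrative; the server's actual response shape is unpublished.

```python
# Minimal sketch of the documented max_chars truncation behavior: the body
# text is cut at max_chars and a flag reports whether anything was dropped.
def truncate_body(text: str, max_chars: int = 8000) -> dict:
    return {"text": text[:max_chars], "truncated": len(text) > max_chars}

long_page = "x" * 10_000
result = truncate_body(long_page)             # default 8000-char cut
short = truncate_body("hello", max_chars=50)  # fits, not truncated
```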
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that it does not execute JavaScript, is best for standard HTML, and will not work with paywalled, login-protected, or JS SPAs. Mentions truncation indication for max_chars. Lacks error handling details, but annotations are absent so description carries full burden.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Every sentence earns its place. The description is well-structured: core function, return values, use cases, differentiation from sibling, and limitations. No redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description sufficiently explains what is returned (title, meta description, word count, body text). Covers when to use and limitations. Could mention error handling but not critical for this simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining max_chars defaults (8000) and maximum (50000), and include_links usage ('set to true when crawling'). Also provides URL examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches any public web page and returns clean plain text. It lists specific use cases (summarize, extract facts, verify claims) and distinguishes itself from sibling web_news_headlines by explicitly stating not to use for news headline discovery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use (reading specific web pages, summarizing, extracting facts) and when-not-to-use (not for discovering news headlines, not for paywalled/JS-rendered pages). Also guides to pass article URLs from web_news_headlines.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.