
Rozbij Bank - Polish bank offers

Ownership verified

Server Details

Search and compare Polish bank offers in real time. Find the best savings accounts, deposits, personal accounts, and business accounts. Browse active bank promotions with expert analysis of hidden fees and traps. Get referral codes for bonuses and calculate deposit interest with tax. 11 tools covering all Polish banking products.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

11 tools
CalculateDepositInterest: Calculate Deposit Interest with Tax (Grade: A)
Read-only · Idempotent

Calculate deposit/savings interest with Polish Belka tax (19% capital gains tax). Returns gross interest, tax amount, net interest and final amount.

Parameters
- amount (required): The deposit amount in PLN
- months (required): Deposit period in months
- annualRate (required): Annual interest rate in percent (e.g. 5.5 for 5.5%)
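The arithmetic the description implies can be sketched locally. This is a minimal sketch assuming simple (non-compounded) interest and straightforward rounding; the server's actual methodology (simple vs. compound, day-count convention) is not documented:

```python
def deposit_interest(amount: float, months: int, annual_rate: float) -> dict:
    """Deposit interest with the Polish 19% Belka tax (simple-interest sketch)."""
    gross = amount * (annual_rate / 100) * (months / 12)
    tax = round(gross * 0.19, 2)   # 19% capital gains ("Belka") tax
    net = round(gross - tax, 2)
    return {
        "grossInterest": round(gross, 2),
        "taxAmount": tax,
        "netInterest": net,
        "finalAmount": round(amount + net, 2),
    }

# 10 000 PLN for 12 months at 5.5%:
# gross 550.00, tax 104.50, net 445.50, final 10445.50
```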
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only/idempotent safety profile; description adds valuable behavioral context including specific tax jurisdiction (Polish Belka), exact tax rate (19%), and return value structure (gross interest, tax amount, net interest, final amount) which is critical given no output schema exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first sentence establishes operation and domain specifics (tax type), second sentence details return structure. Front-loaded with essential context (Polish tax specific) that prevents misuse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a calculation utility with simple numeric inputs, description adequately compensates for missing output schema by enumerating four specific return components. Could be improved by mentioning calculation methodology (simple vs compound interest) or day-count conventions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all three parameters (amount, annualRate, months) fully documented. Description does not add syntax details or format constraints beyond the schema, warranting baseline score of 3 for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description provides specific verb 'Calculate' with clear resource 'deposit/savings interest' and distinguishes domain with 'Polish Belka tax (19% capital gains tax)'. Clearly differs from sibling tools like SearchDeposits or GetBanks which are retrieval-oriented rather than computational.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use or when-not-to-use guidance is provided, nor are alternatives named. However, the naming convention ('Calculate' vs the 'Search'/'Get' siblings) provides implied usage context that this is for computation rather than lookup.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GetBanks: Get Available Banks (Grade: A)
Read-only · Idempotent

Get a list of all banks available on RozbijBank platform. Returns bank titles (use these exact names as bankName filter in SearchPersonalAccounts, SearchSavingsAccounts, SearchDeposits, SearchBusinessAccounts, and SearchPromotions). The bankName filter uses substring matching, so partial names work too.

Parameters
No parameters
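The substring-matching behavior the description attributes to the bankName filter can be illustrated with a small sketch (the bank titles here are hypothetical examples, not the server's actual list):

```python
def matches_bank(bank_title: str, bank_name_filter: str) -> bool:
    """Case-insensitive substring match, as described for the bankName filter."""
    return bank_name_filter.lower() in bank_title.lower()

# Hypothetical titles; partial names work, so "ing" finds "ING Bank Śląski".
banks = ["mBank", "ING Bank Śląski", "PKO Bank Polski"]
print([b for b in banks if matches_bank(b, "ing")])  # ['ING Bank Śląski']
```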

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only/idempotent safety. The description adds valuable behavioral context beyond annotations: it specifies the return value is 'bank titles' (not IDs or codes), and crucially explains that the bankName filter uses 'substring matching'—a key behavioral detail for how results are consumed by sibling tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero redundancy. First sentence establishes purpose, second explains integration with sibling tools, third clarifies matching behavior. Front-loaded with the essential verb-resource pair and no filler words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter reference tool, the description adequately covers purpose, return value type (bank titles), and usage patterns with siblings. While an output schema is absent, the description compensates by explaining what is returned. Minor gap: no mention of potential pagination or empty result handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, which per guidelines establishes a baseline of 4. The description appropriately does not invent parameters, and the vacuous 100% schema coverage for an empty object requires no additional parameter explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'a list of all banks available on RozbijBank platform' using specific verb and resource. It distinguishes itself from sibling Search* tools by clarifying it provides the reference bank names used for filtering in those tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly defines when to use by stating to 'use these exact names as bankName filter in SearchPersonalAccounts, SearchSavingsAccounts...', naming five specific sibling tools. Also clarifies that partial names work due to substring matching, providing clear usage constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GetBlogArticles: Get Blog Articles (Grade: B)
Read-only · Idempotent

Get published blog articles from RozbijBank. Articles cover banking tips, promotion guides, and financial education.

Parameters
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive), so the description appropriately focuses on domain context—describing what content the articles cover. However, it omits behavioral details like pagination limits, whether it returns full article text or just metadata, caching behavior, or scope (all articles vs. recent only) that would help an agent handle the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero redundancy. First sentence establishes action and source; second sentence adds domain context. Every word earns its place and the action is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description does not specify what the tool returns (article list, full content, IDs?) or structure. While the tool is simple (read-only, zero params), the absence of return value documentation combined with no usage guidelines leaves gaps for an agent attempting to integrate this into a workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, which establishes a baseline of 4 per the scoring rubric. The schema coverage is 100% (trivially true for empty schema), and the description correctly implies no filtering is available.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get') and resource ('published blog articles') with source specified ('RozbijBank'). The second sentence clarifies content domain (banking tips, promotion guides, financial education) which distinguishes it from sibling financial query tools like SearchDeposits or CalculateDepositInterest. However, it lacks explicit differentiation from GetPromotionDetails which also retrieves promotional information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like GetPromotionDetails or SearchPromotions. It does not indicate if this retrieves static content versus searchable promotional data, or whether it should be used before or after other discovery tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GetPromotionDetails: Get Promotion Details (Grade: A)
Read-only · Idempotent

Get detailed information about a specific bank promotion, including step-by-step tasks, hidden fees, traps, and expert recommendation.

Parameters
- promoId (required): The promotion ID (from SearchPromotions results)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only/idempotent/safe operation. The description adds valuable context about the nature of data returned—specifically that it includes critical analysis like hidden fees and traps, not just promotional marketing material.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single well-structured sentence that efficiently lists four specific content categories (tasks, fees, traps, recommendations) without redundancy or filler. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only lookup with one parameter and no output schema, the description adequately explains what data is returned. Could mention error cases (e.g., invalid promoId) but sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the single parameter is well-documented in the schema itself. The description does not add additional parameter syntax or semantic details beyond the schema's 'promotion ID' definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' and resource 'bank promotion', and distinguishes from sibling SearchPromotions by specifying detailed content types returned (step-by-step tasks, hidden fees, traps, expert recommendations) rather than just basic listings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying it retrieves details for a 'specific' promotion, suggesting prior identification needed, but lacks explicit guidance on when to use versus SearchPromotions or prerequisites like obtaining the promoId first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GetReferralCode: Get Bank Referral Code (Grade: A)
Read-only · Idempotent

Get a referral/invitation code for a specific bank. Users can use these codes when opening new accounts to get bonuses.

Parameters
- bankName (required): The bank name to get a referral code for (e.g. 'mBank', 'ING')
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is read-only, idempotent, and non-destructive. The description adds domain context explaining what referral codes are for (account opening bonuses) but doesn't disclose additional behavioral details like error cases (e.g., invalid bank names), rate limits, or cache behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence front-loads the core action and resource; the second sentence provides just enough context about usage. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (single required string parameter), full schema coverage, and comprehensive annotations, the description is complete. While no output schema exists, the return value is reasonably implied by 'referral/invitation code', though explicitly stating the return format would strengthen it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'bankName' parameter, the schema already documents the input requirements fully. The description mentions 'for a specific bank' which aligns with the parameter but doesn't add syntax details, format constraints, or examples beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (get) and resource (referral/invitation code) with scope (for a specific bank). It distinguishes from sibling tools like GetPromotionDetails and SearchPromotions by focusing specifically on referral codes rather than general promotional offers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when users would want this tool ('when opening new accounts to get bonuses'), establishing the value proposition. However, it doesn't explicitly contrast with alternatives like GetPromotionDetails for cases where general promotions might be more appropriate than referral codes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GetTodayDate: Get Today's Date (Grade: A)
Read-only · Idempotent

Returns today's date in Warsaw timezone (Europe/Warsaw). Use this when you need to know the current date, e.g. to check promotion deadlines.

Parameters
No parameters
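The described behavior is easy to reproduce client-side; here is a sketch using Python's zoneinfo (the ISO date format is an assumption, since the tool publishes no output schema):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def get_today_date() -> str:
    """Today's date in the Europe/Warsaw timezone (ISO format assumed)."""
    return datetime.now(ZoneInfo("Europe/Warsaw")).date().isoformat()
```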

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnly/idempotent), but description adds crucial behavioral context: the specific timezone (Europe/Warsaw). This is significant added value not inferable from the name or annotations. Does not specify date format or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Front-loaded with core functionality (Warsaw date), followed by usage context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter, read-only utility. Combines with strong annotations to convey complete behavior, though return value format (string vs object) remains unspecified given lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present. Per scoring rules, 0 params establishes a baseline of 4. Description correctly requires no parameter explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly specifies the action ('Returns'), resource ('today's date'), and critical constraint ('Warsaw timezone'), distinguishing it from banking/search siblings like CalculateDepositInterest or SearchPromotions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit positive guidance ('Use this when you need to know the current date') and a concrete domain-specific example ('e.g. to check promotion deadlines'). Lacks explicit 'when not to use' or named alternatives, though the example effectively contextualizes among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

SearchBusinessAccounts: Search Business Accounts (Grade: A)
Read-only · Idempotent

Search and compare business bank accounts (konta firmowe) in Poland. Returns account details for entrepreneurs and companies. Call with no filters to get all business accounts sorted by score.

Parameters
- limit (optional): Maximum number of results to return (1-50). Default: 10
- sortBy (optional): Sort by 'score' (default, overall ranking) or 'reward' (highest bonus/reward first). Default: score
- bankName (optional): Filter by bank name (substring match, case-insensitive). Use GetBanks to see all available bank names. Leave empty to show all banks.
- minScore (optional): Minimum total score (0-100). Default: 0 (no filter)
- customerType (optional): Filter by customer type. Aliases: 'JDG' or 'firma' = jednoosobowa działalność gospodarcza, 'spółka' = all company types (cywilna, jawna, partnerska, komandytowa, z o.o., akcyjna), 'NGO' = fundacja/stowarzyszenie. Other values (substring match): 'fundacja', 'stowarzyszenie', 'wspólnota mieszkaniowa', 'kościół', 'związek wyznaniowy'. Leave empty to show ALL accounts (recommended).
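The customerType aliases documented in the schema can be pictured as a lookup that falls back to substring matching on other values; this is an illustrative sketch, not the server's code:

```python
# Alias table as documented in the customerType parameter description.
CUSTOMER_TYPE_ALIASES = {
    "jdg": ["jednoosobowa działalność gospodarcza"],
    "firma": ["jednoosobowa działalność gospodarcza"],
    "spółka": ["cywilna", "jawna", "partnerska", "komandytowa", "z o.o.", "akcyjna"],
    "ngo": ["fundacja", "stowarzyszenie"],
}

def expand_customer_type(value: str) -> list[str]:
    """Resolve a documented alias; other values fall through to substring match."""
    return CUSTOMER_TYPE_ALIASES.get(value.lower(), [value.lower()])

print(expand_customer_type("NGO"))      # ['fundacja', 'stowarzyszenie']
print(expand_customer_type("kościół"))  # ['kościół']
```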
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond the annotations: it specifies the geographic market (Poland), clarifies the return value ('account details'), and explains the default sorting behavior when no filters are applied. It aligns perfectly with readOnlyHint=true by using non-destructive verbs ('search', 'compare', 'returns').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently cover purpose, return value, and usage instructions without waste. The front-loaded first sentence establishes domain and scope immediately, while the third sentence provides actionable behavioral guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich schema coverage (100%) and clear annotations, the description provides adequate context about the return values ('account details') despite lacking an output schema. It appropriately delegates parameter specifics to the schema while focusing the description on high-level purpose and behavioral patterns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured documentation already comprehensively describes all 5 parameters (including detailed customerType aliases and bankName substring matching). The description appropriately avoids redundancy by not repeating this information, earning the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb+resource combination ('Search and compare business bank accounts'), geographic scope ('in Poland'), and target audience ('entrepreneurs and companies'). The parenthetical Polish term '(konta firmowe)' adds localization clarity that distinguishes this from sibling SearchPersonalAccounts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance for a common use case ('Call with no filters to get all business accounts sorted by score'), which helps the agent understand the default behavior. While it doesn't explicitly name alternatives like SearchPersonalAccounts, the domain specificity ('business', 'entrepreneurs') provides clear contextual boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

SearchDeposits: Search Deposits (Grade: A)
Read-only · Idempotent

Search and compare bank deposits (lokaty) in Poland. Returns interest rates by amount and period. Call with no filters to get all deposits sorted by interest rate. If no results for a specific period, try without the period filter or try adjacent periods.

Parameters
- limit (optional): Maximum number of results to return (1-50). Default: 10
- period (optional): Filter by deposit period (exact match, case-insensitive). Values: '1M', '3M', '6M', '12M', '18M', '24M', '36M', '48M', '60M'. Leave empty to show all periods. If a specific period returns no results, try without this filter.
- sortBy (optional): Sort by 'rate' (default, highest interest first) or 'score' (overall ranking). Default: rate
- bankName (optional): Filter by bank name (substring match, case-insensitive). Use GetBanks to see all available bank names. Leave empty to show all banks.
- currency (optional): Filter by currency (exact match, case-insensitive). Values: 'PLN', 'EUR', 'USD'. Leave empty to show all currencies.
- minInterestRate (optional): Minimum interest rate in percent (e.g. 5.0 for 5%). Default: 0 (no filter)
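The fallback strategy the description recommends (retry adjacent periods when a specific period returns nothing) can be sketched against the documented period values:

```python
# Period values as documented in the period parameter description.
PERIODS = ["1M", "3M", "6M", "12M", "18M", "24M", "36M", "48M", "60M"]

def adjacent_periods(period: str) -> list[str]:
    """Neighbouring periods to retry when a specific period yields no results."""
    i = PERIODS.index(period)
    return [PERIODS[j] for j in (i - 1, i + 1) if 0 <= j < len(PERIODS)]

print(adjacent_periods("12M"))  # ['6M', '18M']
print(adjacent_periods("1M"))   # ['3M']
```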
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only/idempotent safety, allowing the description to focus on adding behavioral specifics: it clarifies the Polish market context, reveals that results include amount ranges and periods, discloses the default sort order (by interest rate), and advises on handling empty result sets. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four efficient sentences with zero redundancy: sentence 1 establishes purpose/domain, sentence 2 describes return values, sentence 3 documents default behavior, and sentence 4 provides error-handling guidance. Information is front-loaded with the most critical context (Polish deposits) first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 6-parameter search tool with no output schema, the description adequately compensates by stating what data is returned (interest rates by amount and period) and explaining result set behavior. Full schema coverage and consistent annotations mean the description doesn't need to elaborate further on safety characteristics or parameter syntax.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema carries the full documentation burden for all six parameters. The description adds semantic value only for the 'period' parameter through specific fallback advice. Baseline score of 3 is appropriate when structured documentation is comprehensive.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description immediately states the specific action (search and compare), resource (bank deposits/lokaty), and geographic scope (Poland). The parenthetical 'lokaty' clarifies the local context, distinguishing it clearly from sibling tools like SearchSavingsAccounts or SearchPersonalAccounts that might serve other markets or product types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit operational guidance: 'Call with no filters to get all deposits sorted by interest rate' explains default behavior, and 'If no results for a specific period, try without the period filter or try adjacent periods' offers concrete fallback strategies. Lacks explicit differentiation from CalculateDepositInterest (computation vs. search) but provides strong implicit guidance through filter-specific advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

SearchPersonalAccounts: Search Personal Accounts (Grade: A)
Read-only · Idempotent

Search and compare personal bank accounts (konta osobiste) in Poland. Returns account details including fees, features, rewards, and scores. Call with no filters to get all accounts sorted by score.

Parameters
- limit (optional): Maximum number of results to return (1-50). Default: 10
- sortBy (optional): Sort by 'score' (default, overall ranking), 'reward' (highest bonus first), or 'popularity' (most popular first). Default: score
- bankName (optional): Filter by bank name (substring match, case-insensitive). Use GetBanks to see all available bank names. Leave empty to show all banks.
- minScore (optional): Minimum total score (0-100). Default: 0 (no filter)
- featureName (optional): Filter by feature name (substring match). Common features: 'Google Pay', 'Apple Pay', 'Cashback', 'BLIK'. Leave empty to show all.
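Taken together, the documented filter and sort semantics look roughly like this local sketch (the account records are hypothetical; the server's actual response shape is undocumented):

```python
def search_personal_accounts(accounts, limit=10, sort_by="score",
                             bank_name="", min_score=0, feature_name=""):
    """Filter and sort accounts per the documented parameter semantics."""
    limit = max(1, min(limit, 50))  # documented range 1-50
    hits = [
        a for a in accounts
        if bank_name.lower() in a["bank"].lower()  # substring, case-insensitive
        and a["score"] >= min_score
        and (not feature_name
             or any(feature_name.lower() in f.lower() for f in a["features"]))
    ]
    return sorted(hits, key=lambda a: a[sort_by], reverse=True)[:limit]

# Hypothetical records for illustration:
accounts = [
    {"bank": "mBank", "score": 90, "reward": 300, "popularity": 8,
     "features": ["BLIK", "Google Pay"]},
    {"bank": "ING Bank Śląski", "score": 85, "reward": 400, "popularity": 9,
     "features": ["BLIK", "Cashback"]},
]
print([a["bank"] for a in search_personal_accounts(accounts, sort_by="reward")])
# ['ING Bank Śląski', 'mBank']
```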
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly and non-destructive status. Description adds valuable context: geographic scope (Poland), return value composition (fees, features, rewards, scores), and comparison capability. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose (sentence 1), return values (sentence 2), usage guidance (sentence 3). Front-loaded with clear action and scope. No redundancy with title or schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema present, but description adequately compensates by listing return fields (fees, features, rewards, scores). Combined with complete annotations and comprehensive input schema, the description provides sufficient context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so schema carries the full burden of parameter documentation. Description implies optional parameters ('Call with no filters') and default sorting, matching schema defaults, but adds no syntax or semantic details beyond what's in the property descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Search and compare') and resource ('personal bank accounts/konta osobiste in Poland'). Explicitly distinguishes from siblings SearchBusinessAccounts, SearchSavingsAccounts, and SearchDeposits by specifying 'personal' accounts and Polish market scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage pattern: 'Call with no filters to get all accounts sorted by score.' This guides the agent on default behavior. Lacks explicit cross-references to siblings (e.g., 'for business accounts use SearchBusinessAccounts'), but domain is clear from description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

SearchPromotions (Search Bank Promotions), Grade A
Read-only, Idempotent

Search time-limited bank promotions (promocje) in Poland with tasks, rewards, and deadlines. These are separate from permanent product offers (use Search*Accounts/SearchDeposits for those). Call with no filters to get all active promotions.

Parameters (JSON Schema)

Name | Required | Description | Default
page | No | Page number (starts from 1). Default 1. | -
sort | No | Sort by: 'endTime' (default, ending soonest first), 'popularity' (most popular first), 'reward' (highest reward first). | endTime
pageSize | No | Results per page (1-20). Default 8. | -
bankNames | No | Filter by bank names (comma-separated, substring match). Use GetBanks to see all available bank names. Example: 'mBank, ING'. Leave empty to show all banks. | -
productTypes | No | Filter by product type codes (comma-separated). Values: 'personalAccount', 'savingsAccount', 'deposit', 'businessAccount', 'creditCard', 'personalVipAccount'. Leave empty to show all types. | -
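Because results are paginated (pages are 1-based, pageSize 1-20, default 8), a client typically clamps its inputs and walks pages until a short page comes back. A sketch of that client-side logic — `fetch_page` is a hypothetical stand-in for the actual SearchPromotions call:

```python
def clamp_page_size(page_size: int) -> int:
    """Clamp pageSize into the documented 1-20 range."""
    return max(1, min(20, page_size))

def iter_promotions(fetch_page, page_size: int = 8):
    """Yield promotions page by page until a short (or empty) page.

    fetch_page(page, page_size) is a hypothetical stand-in for the
    SearchPromotions tools/call; it should return a list of items.
    """
    size = clamp_page_size(page_size)
    page = 1  # pages start from 1
    while True:
        items = fetch_page(page, size)
        yield from items
        if len(items) < size:  # short page => no more results
            break
        page += 1
```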
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, non-destructive, idempotent operation. Description adds valuable domain context: geographic scope (Poland), temporal nature (time-limited vs permanent), and content model (tasks/rewards/deadlines). Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first defines entity and scope, second differentiates from siblings, third provides default behavior guidance. Front-loaded with essential differentiating information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% schema coverage, strong annotations, and five optional parameters, the description successfully covers domain semantics (Polish market), entity lifecycle (time-limited), and tool relationships. Minor gap: does not explicitly describe paginated return structure, though parameters imply this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, meaning all parameters are fully documented in structured fields. Description implicitly supports this by mentioning 'Call with no filters,' aligning with zero required parameters, but adds no syntax details beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies exact action (Search), resource (time-limited bank promotions/promocje), scope (Poland), and key attributes (tasks, rewards, deadlines). Explicitly distinguishes from permanent product offers, clarifying relationship to sibling Search*Accounts/SearchDeposits tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit alternative tool guidance ('use Search*Accounts/SearchDeposits for those') for permanent offers. Includes concrete invocation advice ('Call with no filters to get all active promotions') clarifying the zero-parameter behavior.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

SearchSavingsAccounts (Search Savings Accounts), Grade A
Read-only, Idempotent

Search and compare savings accounts (konta oszczędnościowe) in Poland. Returns interest rates, fees, and conditions. Call with no filters to get all accounts sorted by interest rate.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Maximum number of results to return (1-50). Default 10. | -
sortBy | No | Sort by: 'rate' (default, highest interest first), 'score' (overall ranking), 'popularity' (most popular first). | rate
bankName | No | Filter by bank name (substring match, case-insensitive). Use GetBanks to see all available bank names. Leave empty to show all banks. | -
notConditional | No | Only show unconditional offers (no activity requirements). Default false. | -
minInterestRate | No | Minimum interest rate in percent (e.g. 5.0 for 5%). Default 0 = no filter. | -
noCheckingRequired | No | Only show accounts that don't require a checking account (standalone savings). Default false. | -
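The filter semantics documented above (case-insensitive substring match on bank name, a numeric rate floor, boolean flags that narrow the result set, rate-descending default sort) can be illustrated locally. The sample rows and field names below are hypothetical — the real response shape is not documented in this listing:

```python
# Hypothetical account rows; field names are illustrative only.
accounts = [
    {"bank": "mBank", "rate": 6.0, "conditional": True,  "needsChecking": True},
    {"bank": "ING",   "rate": 5.0, "conditional": False, "needsChecking": False},
    {"bank": "Pekao", "rate": 4.0, "conditional": False, "needsChecking": True},
]

def filter_savings(rows, bank_name="", min_rate=0.0,
                   not_conditional=False, no_checking_required=False):
    """Mirror the documented filter semantics of SearchSavingsAccounts."""
    out = []
    for r in rows:
        if bank_name and bank_name.lower() not in r["bank"].lower():
            continue                      # substring, case-insensitive
        if r["rate"] < min_rate:          # minInterestRate floor
            continue
        if not_conditional and r["conditional"]:
            continue                      # unconditional offers only
        if no_checking_required and r["needsChecking"]:
            continue                      # standalone savings only
        out.append(r)
    # default sort: highest interest rate first
    return sorted(out, key=lambda r: r["rate"], reverse=True)

standalone = filter_savings(accounts, not_conditional=True,
                            no_checking_required=True)
```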
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context beyond annotations: it discloses what data is returned ('interest rates, fees, and conditions'), specifies the domain (Poland), and clarifies default sorting behavior when no filters are applied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: (1) purpose and scope, (2) return values, (3) default invocation pattern. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description appropriately describes return values ('interest rates, fees, and conditions'). It covers the domain (Poland), localization, and default behavior. With 100% schema coverage and strong annotations, this is complete for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description mentions 'no filters' which aligns with the zero required parameters in the schema, but does not add syntax details, format examples, or parameter interaction rules beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Search and compare'), resource ('savings accounts'), and scope ('in Poland'). It includes the Polish localization '(konta oszczędnościowe)' which clearly distinguishes it from sibling tools like SearchDeposits or SearchBusinessAccounts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides concrete usage guidance ('Call with no filters to get all accounts sorted by interest rate'), indicating the default behavior when parameters are omitted. It lacks explicit comparison to siblings (e.g., when to use this vs SearchDeposits), but the clear context of 'savings accounts' vs other account types provides implicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
