
FDIC BankFind

Server Details

Bank financials, branch locations, deposit data, and failure history from FDIC

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
get_bank_failures

Get recent bank failures from FDIC records.

Returns details of failed FDIC-insured institutions including failure
date, resolution type, estimated cost, and pre-failure assets.

Args:
    state: Two-letter state abbreviation to filter by (e.g. 'CA').
    start_year: Only return failures from this year onward (e.g. 2020).
Parameters (JSON Schema)

Name        Required  Description  Default
state       No
start_year  No

Output Schema

Name    Required  Description
result  Yes
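Since the server speaks MCP over Streamable HTTP, a client ultimately sends a JSON-RPC `tools/call` request. A minimal sketch of that payload using the documented argument names (the `build_tool_call` helper is illustrative, not part of the server):

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build an MCP JSON-RPC 2.0 'tools/call' request payload."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Both filters are optional; omit either to broaden the query.
payload = build_tool_call("get_bank_failures", {"state": "CA", "start_year": 2020})
print(json.dumps(payload, indent=2))
```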
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the output content (failure date, resolution type, cost, assets), which is helpful given that an output schema exists, but it lacks operational details such as authentication needs, rate limits, or data update frequency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient three-part structure: purpose sentence, return value description, and Args documentation. Front-loaded with key information. 'Args:' format is slightly informal but clear and functional.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a low-complexity tool (2 optional filters). Mentions key output fields since output schema exists, documents both parameters, and specifies the domain (FDIC-insured institutions).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage. The description provides semantic meaning for both optional parameters, including format guidance ('Two-letter state abbreviation') and concrete examples ('CA', 2020).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Get') + resource ('bank failures') + source ('FDIC records'). Clearly distinguishes from siblings which deal with active institutions/branches rather than historical failure events.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by the resource type (failures vs institutions/branches), but lacks explicit guidance on when to use this versus search_institutions or get_institution for historical data queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_county_branches

Get all bank branches in a county ranked by deposits.

Returns branch-level deposit data from the Summary of Deposits (SOD)
for all institutions in the specified county.

Args:
    state_fips: Two-digit state FIPS code (e.g. '06' for California).
    county_fips: Three-digit county FIPS code (e.g. '037' for Los Angeles).
Parameters (JSON Schema)

Name         Required  Description  Default
state_fips   Yes
county_fips  Yes

Output Schema

Name    Required  Description
result  Yes
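FIPS codes are fixed-width strings, not integers, so leading zeros matter ('06', not 6). A small sketch of padding numeric codes before calling the tool, using the examples from the description (`to_fips` is a hypothetical helper):

```python
def to_fips(code, width):
    """Zero-pad a numeric FIPS code to its fixed width, e.g. 6 -> '06'."""
    return str(code).zfill(width)

# Examples from the tool description: California ('06'), Los Angeles ('037').
arguments = {
    "state_fips": to_fips(6, 2),
    "county_fips": to_fips(37, 3),
}
print(arguments)  # {'state_fips': '06', 'county_fips': '037'}
```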
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses that results are 'ranked by deposits' and specifies the data source ('Summary of Deposits'), providing important behavioral context about output ordering and provenance. It does not mention rate limits or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence earns its place: the first states the action and ranking, the second explains the data source and scope, and the Args section provides the critical parameter documentation. Well-structured with no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema (which, per the scoring rubric, relieves the description of detailing return values), the description adequately covers the tool's purpose, data source, and parameters. For a two-parameter read tool, mentioning the SOD source and deposit ranking provides sufficient context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate fully. The Args section provides excellent semantic value: it explains the format ('Two-digit', 'Three-digit') and provides concrete examples ('06' for California, '037' for Los Angeles) that clarify the FIPS code format requirements beyond the raw schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific action ('Get'), resource ('bank branches'), scope ('in a county'), and ordering ('ranked by deposits'). It also identifies the data source ('Summary of Deposits'), clearly distinguishing it from sibling tools that fetch institutions or failures rather than county-level branch aggregates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description specifies the scope (county-level, all institutions) which implicitly defines when to use it versus institution-specific tools, but does not explicitly name sibling alternatives like 'get_institution_branches' or provide explicit when/when-not guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_institution

Get detailed financial information for a specific FDIC-insured institution.

Returns Call Report data for the most recent reporting period (2024-06-30)
including assets, deposits, income, classification, and regulator.

Args:
    cert: FDIC certificate number uniquely identifying the institution.
Parameters (JSON Schema)

Name  Required  Description  Default
cert  Yes

Output Schema

Name    Required  Description
result  Yes
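The schema types `cert` as a bare integer with no description, so a client may want to validate the value before the call. A hedged sketch (the validation rules and the example certificate number are assumptions, not documented constraints):

```python
def build_get_institution_args(cert):
    """Package the FDIC certificate number, rejecting obviously bad input."""
    if not isinstance(cert, int) or isinstance(cert, bool) or cert <= 0:
        raise ValueError("cert must be a positive FDIC certificate number")
    return {"cert": cert}

args = build_get_institution_args(1234)  # hypothetical certificate number
print(args)  # {'cert': 1234}
```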
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It effectively specifies the data returned (Call Report data including assets, deposits, income, classification, regulator) and the temporal scope (most recent reporting period 2024-06-30), which provides crucial context for the agent. Does not explicitly state read-only nature, though implied by 'Get'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured three-section format: purpose declaration, return value specification, and parameter definition. Every sentence earns its place with no filler. Front-loaded with the core action. Minor deduction for the unconventional 'Args:' formatting, which is slightly less formal than integrated prose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter lookup tool with output schema present. The description wisely includes the specific reporting date (2024-06-30) and data source (Call Report) which are critical for financial data context. Could improve by noting if historical periods are available beyond the most recent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage (only 'title': 'Cert'), so the description must compensate. The Args section successfully defines 'cert' as the 'FDIC certificate number uniquely identifying the institution,' providing necessary semantic meaning. One point is deducted for not mentioning expected format constraints or where to obtain this ID.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get') + specific resource ('detailed financial information'/'Call Report data') + scope ('FDIC-insured institution'). By specifying 'specific' institution and requiring a unique certificate number, it implicitly distinguishes from sibling search_institutions (filtered search) and branch-focused tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage by requiring a unique 'cert' identifier, suggesting the agent needs this value beforehand. However, lacks explicit guidance on when to use search_institutions first to obtain the cert number versus calling this tool directly, or when to use get_institution_branches instead.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_institution_branches

Get all branch locations for an FDIC-insured institution.

Returns branch-level data from the Summary of Deposits (SOD) including
addresses, deposit amounts, branch type, and coordinates.

Args:
    cert: FDIC certificate number uniquely identifying the institution.
Parameters (JSON Schema)

Name  Required  Description  Default
cert  Yes

Output Schema

Name    Required  Description
result  Yes
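Once branch records come back, a common follow-up is aggregating deposits. A sketch over hypothetical records (the field names below are illustrative; the actual SOD field names come from the output schema):

```python
# Hypothetical branch records mirroring the fields the description lists:
# address, deposit amount, branch type, and coordinates.
branches = [
    {"address": "1 Main St", "deposits": 250_000,
     "branch_type": "full service", "coords": (34.05, -118.24)},
    {"address": "9 Oak Ave", "deposits": 120_000,
     "branch_type": "limited service", "coords": (34.10, -118.30)},
]

# Total deposits across branches, and the branch holding the most.
total_deposits = sum(b["deposits"] for b in branches)
largest = max(branches, key=lambda b: b["deposits"])
print(total_deposits, largest["address"])  # 370000 1 Main St
```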
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the data source (Summary of Deposits) and the specific returned fields (addresses, deposit amounts, coordinates), which is valuable behavioral context. It does not mention rate limits or error conditions, but 'Get' clearly signals read-only safety.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three distinct, front-loaded sections (purpose, return value details, parameter definition). Every sentence adds value—no repetition of tool name, no redundancy with schema structure. Appropriate length for tool complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a read-only retrieval tool. Compensates for poor input schema via Args documentation. Lists return fields (though output schema exists, this adds context about SOD data). Lacks only explicit sibling comparisons.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. The Args section fully compensates by explaining that 'cert' is an 'FDIC certificate number uniquely identifying the institution,' providing necessary semantic context beyond the bare integer type in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb+resource ('Get all branch locations') and scope ('for an FDIC-insured institution'). Distinguishes from get_institution (institution vs. branches) and implies institution-specific lookup, though it doesn't explicitly differentiate from get_county_branches (geographic vs. institutional search).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through the Args documentation (requires FDIC certificate number), indicating use when you have a specific institution identifier. However, lacks explicit guidance on when to use this versus get_county_branches or how to obtain the certificate number.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_institutions

Search FDIC-insured financial institutions by name.

Returns matching banks with key financial data from the most recent
Call Report (2024-06-30), sorted by total assets descending.

Args:
    search: Institution name or partial name to search for.
    limit: Maximum number of results to return (default 10, max 100).
Parameters (JSON Schema)

Name    Required  Description  Default
limit   No
search  Yes

Output Schema

Name    Required  Description
result  Yes
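The Usage Guidelines notes in these assessments point at the natural chain: search by name first, then fetch detail by certificate number. A sketch of that flow with a stubbed transport (`call_tool` and its canned data are hypothetical; a real client would send JSON-RPC `tools/call` requests over Streamable HTTP):

```python
def call_tool(name, arguments):
    """Hypothetical transport stub returning canned data for illustration."""
    if name == "search_institutions":
        # Results are sorted by total assets, descending, per the description.
        return [{"name": "Example Bank", "cert": 1234}]
    if name == "get_institution":
        return {"cert": arguments["cert"], "assets": 5_000_000}
    raise KeyError(name)

# Step 1: fuzzy search by (partial) name to discover the certificate number.
matches = call_tool("search_institutions", {"search": "Example", "limit": 10})
# Step 2: fetch Call Report detail for the top match.
detail = call_tool("get_institution", {"cert": matches[0]["cert"]})
print(detail["cert"])  # 1234
```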
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Strong disclosure of data provenance ('Call Report 2024-06-30'), sort behavior ('total assets descending'), and return content ('key financial data'). Since no annotations are provided, the description carries the full burden; it could improve by stating the read-only nature or the match semantics (contains vs. exact).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences plus Args section. Every sentence earns its place: purpose statement, behavioral details (source/date/sorting), then parameter specs. No redundancy or fluff. Front-loaded with essential info.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given output schema exists, description appropriately focuses on tool behavior rather than return structure. Covers data freshness, sorting, and pagination (via limit). Minor gap: no mention of rate limiting or specific matching algorithm (fuzzy vs wildcard).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the Args section fully compensates by documenting both parameters: search supports partial names and limit has explicit bounds (default 10, max 100). Adds critical constraint info absent from schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Search') + resource ('FDIC-insured financial institutions') + scope ('by name'). However, it doesn't explicitly distinguish from sibling get_institution (likely for exact ID lookup), leaving ambiguity about when to search vs fetch directly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains implied usage via Args section noting 'partial name' support, suggesting fuzzy matching capability. However, lacks explicit when-not guidance or named alternatives (e.g., 'Use get_institution if you have the exact FDIC number').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

