
FDIC BankFind MCP Server

Server Details

Search FDIC institutions, branches, failures, and peer analysis over MCP.

Status: Healthy
Transport: Streamable HTTP


Available Tools

23 tools
fdic_analyze_bank_health · Analyze Bank Health (CAMELS-Style) · Grade: A
Read-only · Idempotent

Produce a CAMELS-style analytical assessment for a single FDIC-insured institution using the public off-site proxy model.

Scores five components — Capital (C), Asset Quality (A), Earnings (E), Liquidity (L), Sensitivity (S) — using published FDIC financial data and derives a weighted composite rating (1=Strong to 5=Unsatisfactory), plus a proxy model overall band (1.0–4.0 scale).

Output includes:

  • Composite and component ratings with individual metric scores

  • Proxy model overall assessment band with capital classification

  • Management overlay assessment (inferred from public data patterns)

  • Trend analysis across prior quarters for key metrics

  • Risk signals flagging critical and warning-level concerns

  • Structured JSON for programmatic consumption (legacy + proxy fields)

NOTE: Management (M) is omitted from component scoring — cannot be assessed from public data. Sensitivity (S) uses proxy metrics (NIM trend, securities concentration). This is a public off-site analytical proxy, not an official CAMELS rating.

Parameters (JSON Schema)

  • cert (required): FDIC Certificate Number of the institution to analyze.
  • repdte (optional): Report Date (YYYYMMDD). Defaults to the most recent quarter likely to have published data.
  • quarters (optional): Number of prior quarters to fetch for trend analysis (default 8).
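As a concrete illustration, a call to this tool travels as a standard MCP `tools/call` JSON-RPC request. The sketch below builds that payload only; transport and session plumbing are omitted, the certificate number is a placeholder, and the cert type (string vs. integer) is an assumption.

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request, the method MCP clients
    use to invoke a server-side tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical invocation: CERT 3511 is purely a placeholder value.
request = build_tool_call(
    "fdic_analyze_bank_health",
    {"cert": "3511", "repdte": "20250630", "quarters": 8},
)
print(json.dumps(request, indent=2))
```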
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (readOnly, idempotent), the description adds significant methodological transparency: it discloses that Sensitivity (S) uses proxy metrics (NIM trend, securities concentration), explains the rating scale (1=Strong to 5=Unsatisfactory), and details the trend analysis behavior across prior quarters. This helps users interpret the analytical output correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately structured with the core purpose front-loaded, followed by a bulleted output specification and a clear NOTE section for limitations. While lengthy, the bullet points efficiently convey the complex output structure (six distinct data categories) without unnecessary prose, earning their place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's analytical complexity (five component scores, trend analysis, risk signals) and the absence of an output schema, the description comprehensively compensates by detailing exactly what the output includes: composite ratings, capital classification, trend analysis, risk signals, and JSON structure. This provides sufficient information for an agent to invoke the tool and handle results confidently.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all three parameters (cert, repdte, quarters). The description references 'single FDIC-insured institution' and 'prior quarters' which loosely map to parameters, but does not add semantic context, format examples, or business logic beyond what the schema already provides. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Produce[s] a CAMELS-style analytical assessment for a single FDIC-insured institution,' providing specific verb, resource, and scope. It clearly distinguishes this as a composite health analysis versus siblings that focus on specific domains like credit concentration or funding profile.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides crucial contextual guidance by noting this is a 'public off-site analytical proxy, not an official CAMELS rating,' setting proper expectations for when to use it versus seeking official regulatory ratings. It explicitly states that Management (M) is omitted due to data limitations, helping users understand scope constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_analyze_credit_concentration · Analyze Credit Concentration · Grade: A
Read-only · Idempotent

Analyze loan portfolio composition and credit concentration risk for an FDIC-insured institution. Computes CRE concentration relative to capital (per 2006 interagency guidance), loan-type breakdown, and flags concentration risks.

Output includes:

  • Loan portfolio composition (CRE, C&I, consumer, residential, agricultural shares)

  • CRE and construction concentration relative to total capital

  • Loan-to-asset ratio

  • Concentration risk signals based on interagency guidance thresholds

  • Structured JSON for programmatic consumption

NOTE: This is an analytical tool based on public financial data.

Parameters (JSON Schema)

  • cert (required): FDIC Certificate Number
  • repdte (optional): Report date (YYYYMMDD). Defaults to most recent quarter.
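The 2006 interagency guidance cited above is commonly summarized as two capital-relative screening thresholds: construction and development (C&D) loans at or above 100% of total capital, and total CRE loans at or above 300% of total capital. A minimal sketch of that screening logic follows; it is not the server's implementation, and the exact Call Report field definitions behind each input are assumptions.

```python
def cre_concentration_flags(cre_loans: float, cnd_loans: float,
                            total_capital: float) -> list:
    """Flag CRE concentration per the 2006 interagency guidance screens.

    All inputs are balances in the same units (e.g. $ thousands).
    Returns a list of flag codes (names here are illustrative).
    """
    if total_capital <= 0:
        return ["invalid_capital"]
    flags = []
    # Screen 1: C&D loans >= 100% of total capital.
    if cnd_loans / total_capital >= 1.0:
        flags.append("cnd_over_100pct_capital")
    # Screen 2: total CRE loans >= 300% of total capital.
    if cre_loans / total_capital >= 3.0:
        flags.append("cre_over_300pct_capital")
    return flags
```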
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true and destructiveHint=false, the description adds valuable context: it notes the tool is 'based on public financial data,' specifies output format as 'Structured JSON for programmatic consumption,' and details the specific methodology (2006 interagency guidance thresholds). This provides important behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with the purpose upfront ('Analyze loan portfolio...'), followed by specific computations, a structured list of outputs, and a clarifying note. Every sentence earns its place; the detailed output list is necessary given the absence of an output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description comprehensively compensates by detailing the return structure (loan composition shares, CRE ratios, risk signals). It includes regulatory context (2006 guidance) appropriate for an analytical tool. Minor gap: no mention of error handling for invalid cert numbers, though openWorldHint implies external data handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (cert described as 'FDIC Certificate Number' and repdte as 'Report date'), the schema fully documents parameters. The description references 'FDIC-insured institution' which aligns with the cert parameter but doesn't add syntax details or usage examples beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Analyze[s] loan portfolio composition and credit concentration risk' with specific focus on 'CRE concentration relative to capital (per 2006 interagency guidance).' It clearly distinguishes from siblings like fdic_analyze_bank_health or fdic_analyze_funding_profile by specifying credit concentration and CRE metrics rather than general health or liquidity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (analyzing CRE concentration risk, loan portfolio composition) and references the specific regulatory framework (2006 interagency guidance). However, it lacks explicit 'when not to use' guidance or direct comparisons to sibling tools like fdic_analyze_bank_health.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_analyze_funding_profile · Analyze Funding Profile · Grade: A
Read-only · Idempotent

Analyze deposit composition, wholesale funding reliance, and funding risk for an FDIC-insured institution.

Output includes:

  • Deposit composition (core, brokered, foreign deposit shares)

  • Wholesale funding reliance and FHLB advances relative to assets

  • Cash ratio for near-term liquidity

  • Funding risk signals based on supervisory thresholds

  • Structured JSON for programmatic consumption

NOTE: This is an analytical tool based on public financial data.

Parameters (JSON Schema)

  • cert (required): FDIC Certificate Number
  • repdte (optional): Report date (YYYYMMDD). Defaults to most recent quarter.
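The deposit-composition shares listed above can be illustrated with a small helper. This is a sketch of the metrics the tool reports, not its implementation; the input field names are assumptions, and shares are returned as fractions of total deposits.

```python
def deposit_shares(core: float, brokered: float, foreign: float,
                   total_deposits: float) -> dict:
    """Compute core/brokered/foreign deposit shares of total deposits.

    All balances must be in the same units (e.g. $ thousands).
    """
    if total_deposits <= 0:
        raise ValueError("total_deposits must be positive")
    return {
        "core_share": core / total_deposits,
        "brokered_share": brokered / total_deposits,
        "foreign_share": foreign / total_deposits,
    }

# Illustrative balances only.
shares = deposit_shares(core=800, brokered=150, foreign=50,
                        total_deposits=1000)
```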
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive behavior. The description adds valuable context by noting the data source ('public financial data') and detailing the output format ('Structured JSON'), but omits rate limits, caching behavior, or data freshness/latency details that would help an agent understand operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with the primary purpose stated immediately, followed by clear bullet points enumerating outputs, and ending with a data provenance note. Every sentence conveys essential information without redundancy, making efficient use of the agent's context window.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacking an output schema, the description effectively compensates by listing specific output components (deposit shares, FHLB advances, cash ratios, risk signals) and noting the JSON format. It adequately covers the tool's contract, though it could strengthen completeness by mentioning error conditions (e.g., invalid CERT numbers) or data availability limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input parameters (cert, repdte) are fully documented in the schema itself. The description does not add semantic clarification beyond the schema (e.g., how to find a CERT number or date range availability), meriting the baseline score for complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Analyze') and resources ('deposit composition', 'wholesale funding reliance', 'funding risk') to clearly define the tool's scope. It distinguishes itself from sibling analysis tools like fdic_analyze_bank_health by focusing specifically on funding structure and liquidity metrics rather than broad health indicators.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the detailed output list implies when to use this tool (when seeking funding composition and liquidity metrics), it lacks explicit guidance on when to prefer this over fdic_analyze_bank_health or fdic_detect_risk_signals. No prerequisites or exclusions are mentioned despite the domain requiring specific institutional identifiers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_analyze_securities_portfolio · Analyze Securities Portfolio · Grade: A
Read-only · Idempotent

Analyze securities portfolio size, composition, and concentration risk for an FDIC-insured institution.

Output includes:

  • Securities relative to total assets and capital

  • MBS concentration within the securities portfolio

  • AFS/HTM breakdown (when available)

  • Risk signals for portfolio concentration and interest rate exposure

  • Structured JSON for programmatic consumption

NOTE: This is an analytical tool based on public financial data. AFS/HTM breakdown is not currently available from the FDIC API.

Parameters (JSON Schema)

  • cert (required): FDIC Certificate Number
  • repdte (optional): Report date (YYYYMMDD). Defaults to most recent quarter.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnly/idempotent/destructive hints, the description adds valuable behavioral context: it clarifies the tool uses 'public financial data,' warns about specific data availability gaps (AFS/HTM breakdown), and confirms output format ('Structured JSON for programmatic consumption'). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Structure is exemplary: single clear purpose statement, bulleted output specification (high signal-to-noise), and a concise NOTE qualifying data limitations. Every sentence serves a distinct purpose—defining scope, detailing outputs, or setting expectations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking a formal output schema, the description compensates effectively by enumerating specific output components (securities ratios, MBS concentration, risk signals) and data provenance. The limitation disclosure is critical for an analytical tool dependent on external API availability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (cert and repdte fully documented), the baseline score applies. The description adds no parameter-specific context (e.g., typical cert number format or date range constraints), but the schema carries the full semantic load effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific analytical purpose: analyzing 'securities portfolio size, composition, and concentration risk' for FDIC-insured institutions. It distinguishes itself from sibling tools like fdic_analyze_credit_concentration by specifying unique securities metrics (MBS concentration, AFS/HTM breakdown, interest rate exposure).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear contextual scope (FDIC-insured institution securities analysis) and transparently discloses data limitations (AFS/HTM 'not currently available'). However, it does not explicitly name sibling alternatives or state when NOT to use this tool versus fdic_analyze_bank_health or fdic_ubpr_analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_compare_bank_snapshots · Compare Bank Snapshot Trends · Grade: A
Read-only · Idempotent

Compare FDIC reporting snapshots across a set of institutions and rank the results by growth, profitability, or efficiency changes.

This tool is designed for heavier analytical prompts that would otherwise require many separate MCP calls. It batches the institution roster lookup, financial snapshots, and optional office-count snapshots, and it can also fetch a quarterly time series inside the server.

Good uses:

  • Identify North Carolina banks with the strongest asset growth from 2021 to 2025

  • Compare whether deposit growth came with branch expansion or profitability improvement

  • Rank a specific cert list by ROA, ROE, asset-per-office, or deposit-to-asset changes

  • Pull a quarterly trend series and highlight inflection points, streaks, and structural shifts

Inputs:

  • state or certs: choose a geographic roster or provide a direct comparison set

  • start_repdte, end_repdte: Report Dates (REPDTE) in YYYYMMDD format — must be quarter-end dates (0331, 0630, 0930, 1231)

  • analysis_mode: snapshot or timeseries

  • institution_filters: optional extra institution filter when building the roster

  • active_only: default true

  • include_demographics: default true, adds office-count comparisons when available

  • sort_by: ranking field (default: asset_growth). All options: asset_growth, asset_growth_pct, dep_growth, dep_growth_pct, netinc_change, netinc_change_pct, roa_change, roe_change, offices_change, assets_per_office_change, deposits_per_office_change, deposits_to_assets_change

  • sort_order: ASC or DESC

  • limit: maximum ranked results to return

Returns concise comparison text plus structured deltas, derived metrics, and insight tags for each institution.

Parameters (JSON Schema)

  • certs (optional): Optional list of FDIC certificate numbers to compare directly. Max 100.
  • limit (optional): Maximum number of ranked comparisons to return.
  • state (optional): State name for the institution roster filter. Example: "North Carolina"
  • sort_by (optional): Comparison field used to rank institutions. Valid options: asset_growth, asset_growth_pct, dep_growth, dep_growth_pct, netinc_change, netinc_change_pct, roa_change, roe_change, offices_change, assets_per_office_change, deposits_per_office_change, deposits_to_assets_change. Default: asset_growth
  • end_repdte (optional): Ending Report Date (REPDTE) in YYYYMMDD format. Must be a quarter-end date: March 31 (0331), June 30 (0630), September 30 (0930), or December 31 (1231). Must be later than start_repdte. Example: 20251231 for Q4 2025. If omitted, defaults to the most recent quarter-end date with published data (~90-day lag).
  • sort_order (optional): Sort direction for the ranked comparisons. Default: DESC
  • active_only (optional): Limit the comparison set to currently active institutions.
  • start_repdte (optional): Starting Report Date (REPDTE) in YYYYMMDD format. Must be a quarter-end date: March 31 (0331), June 30 (0630), September 30 (0930), or December 31 (1231). Example: 20210331 for Q1 2021. If omitted, defaults to the same quarter one year before end_repdte.
  • analysis_mode (optional): Use snapshot for two-point comparison or timeseries for quarterly trend analysis across the date range. Default: snapshot
  • institution_filters (optional): Additional institution-level filter used when building the comparison set. Example: BKCLASS:N or CITY:"Charlotte"
  • include_demographics (optional): Include office-count changes from the demographics dataset when available.
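The quarter-end constraint on start_repdte and end_repdte can be validated client-side before invoking the tool, avoiding a round trip on malformed dates. A minimal sketch:

```python
def is_quarter_end_repdte(repdte: str) -> bool:
    """Check that a REPDTE string is a quarter-end date in YYYYMMDD form.

    Valid month-day endings per the tool's constraint:
    0331, 0630, 0930, 1231.
    """
    if len(repdte) != 8 or not repdte.isdigit():
        return False
    return repdte[4:] in {"0331", "0630", "0930", "1231"}
```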
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint, idempotentHint, and openWorldHint. The description adds valuable behavioral context beyond these: it discloses the internal batching behavior ('batches institution roster lookup... inside the server'), explains the ~90-day data lag (in schema defaults), and describes the return format ('concise comparison text plus structured deltas, derived metrics, and insight tags').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose statement, design rationale, bulleted use cases, and grouped input explanations. Despite duplicating some schema information, the 'Inputs' section earns its place by presenting the 11 parameters in a logical narrative flow that helps LLMs map user intent to the correct arguments.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's high complexity (11 parameters, two analysis modes, optional time series), no required parameters, and absence of an output schema, the description adequately covers invocation patterns. It explains return data types sufficiently for an agent to parse results, though it could further detail the structure of 'derived metrics' or 'insight tags' for a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds significant value through the 'Inputs' section, which logically groups parameters (state/certs mutual exclusivity, date range constraints), explains the quarter-end date requirement, and translates analysis_mode semantics ('snapshot for two-point comparison or timeseries for quarterly trend analysis').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific action (compare) and resource (FDIC reporting snapshots), explicitly mentioning the ranking dimensions (growth, profitability, efficiency). The 'Good uses' examples and mention of 'institutions' clearly distinguish it from sibling tools like fdic_compare_peer_health (which aggregates peer groups) and fdic_analyze_bank_health (single institution focus).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides excellent positive guidance through four concrete 'Good uses' examples and states it is designed for 'heavier analytical prompts that would otherwise require many separate MCP calls.' However, it lacks explicit negative guidance (when *not* to use) or named sibling alternatives for simpler queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_compare_peer_health · Compare Peer Health (CAMELS Rankings) · Grade: A
Read-only · Idempotent

Compare CAMELS-style health scores across a group of FDIC-insured institutions.

Three usage modes:

  • Explicit list: provide certs (up to 50) for a specific comparison set

  • State-wide scan: provide state to compare all active institutions in that state

  • Asset-based: provide asset_min/asset_max to compare institutions by size

Optionally provide cert to highlight a subject institution's position in the ranking.

Output: Ranked list with per-institution proxy_score (1-4 scale) and proxy_band, sorted by composite or any individual component. When a subject cert is provided, includes peer percentile context, asset-weighted peer averages, and the subject's full proxy assessment. Auto-peer selection derives asset bands from report-date financials and broadens the cohort if fewer than 10 peers match.

NOTE: Public off-site analytical proxy — not official supervisory ratings.

Parameters (JSON Schema)

  • cert (optional): Subject institution CERT to highlight in the ranking.
  • certs (optional): Explicit list of CERTs to compare (max 50).
  • limit (optional): Max institutions to return in the response.
  • state (optional): Two-letter state code to select all active institutions (e.g., "WY").
  • repdte (optional): Report Date (YYYYMMDD). Defaults to the most recent quarter.
  • sort_by (optional): Sort results by composite or a specific CAMELS component rating. Default: composite
  • asset_max (optional): Maximum total assets ($thousands) for peer selection.
  • asset_min (optional): Minimum total assets ($thousands) for peer selection.
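Because the three selection modes are easy to mix up, a client-side argument builder can enforce one mode per call. This sketch assumes the modes are mutually exclusive, which the description implies but does not state outright; the helper name is hypothetical.

```python
def peer_health_args(certs=None, state=None, asset_min=None, asset_max=None,
                     subject_cert=None, sort_by="composite"):
    """Build arguments for fdic_compare_peer_health, enforcing that
    exactly one selection mode is used: an explicit cert list, a
    state code, or an asset range."""
    modes = [certs is not None,
             state is not None,
             asset_min is not None or asset_max is not None]
    if sum(modes) != 1:
        raise ValueError("choose exactly one mode: certs, state, or asset range")
    if certs is not None and len(certs) > 50:
        raise ValueError("max 50 certs")
    args = {"sort_by": sort_by}
    if certs is not None:
        args["certs"] = certs
    if state is not None:
        args["state"] = state
    if asset_min is not None:
        args["asset_min"] = asset_min
    if asset_max is not None:
        args["asset_max"] = asset_max
    if subject_cert is not None:
        args["cert"] = subject_cert  # optional subject highlighting
    return args

# State-wide scan with a highlighted subject institution (placeholder CERT).
example = peer_health_args(state="WY", subject_cert="12345")
```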
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover the read-only/idempotent safety profile, but the description adds crucial behavioral context: the important disclaimer that this is a 'Public off-site analytical proxy — not official supervisory ratings,' details on auto-peer selection logic (asset band derivation, cohort broadening), and comprehensive output format explanation (proxy_score scale, sorting options) that compensates for missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with clear information hierarchy: purpose statement, bulleted usage modes, output specification, and disclaimer note. Every sentence conveys essential information without redundancy. The three usage modes are efficiently enumerated and the technical details (proxy_score 1-4 scale, sorting components) are precisely specified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage appropriate for an 8-parameter tool with complex query patterns. The description thoroughly explains the three input modes, details the ranking output format (compensating for lack of output schema), clarifies the subject institution highlighting behavior, and includes the critical regulatory disclaimer about proxy ratings. No significant gaps given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema carries the parameter documentation burden effectively. The description adds value by explaining how parameter groups interact (the three mutually exclusive modes: certs vs state vs asset range) and the optional cert highlighting behavior, but does not significantly augment individual parameter semantics beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool compares 'CAMELS-style health scores across a group of FDIC-insured institutions' with specific scope on peer rankings. It effectively distinguishes from siblings like fdic_analyze_bank_health (single institution) by emphasizing peer comparison, cohort selection, and ranking functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly documents three distinct usage modes (explicit cert list, state-wide scan, asset-based selection) and explains the optional 'subject institution' highlighting pattern. However, it does not explicitly name sibling alternatives (e.g., fdic_peer_group_analysis) or state when to avoid this tool in favor of single-institution analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_detect_risk_signals · Detect Risk Signals (Early Warning) · Grade: A
Read-only · Idempotent

Scan FDIC-insured institutions for early warning risk signals using the public_camels_proxy_v1 analytical engine.

Standardized signal codes with severity levels:

  • Critical: capital_undercapitalized (PCA breach), earnings_loss (ROA < 0), reserve_coverage_low (< 50%)

  • Warning: capital_buffer_erosion, credit_deterioration, credit_deterioration_trending, earnings_pressure, margin_compression, funding_stress, funding_ltd_stretched, rate_risk_proxy_elevated, wholesale_funding_elevated

  • Info: merger_distorted_trend, stale_reporting_period

Three scan modes:

  • State-wide: provide state to scan all active institutions

  • Explicit list: provide certs (up to 50)

  • Asset-based: provide asset_min/asset_max

Output: Per-institution risk signals ranked by severity count. The proxy engine drives signal generation internally; the output is signal-shaped, not assessment-shaped.

NOTE: Public off-site analytical proxy — not official supervisory ratings.
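
The three scan modes can be illustrated as argument payloads. This is a minimal sketch: the parameter names come from the schema, but mode exclusivity and the validation shape are assumptions, not documented server behavior.

```python
# Hypothetical argument payloads for fdic_detect_risk_signals, one per
# scan mode. Mode exclusivity is an assumption, not documented behavior.
state_scan = {"state": "TX", "min_severity": "warning"}         # state-wide
explicit_scan = {"certs": [3511], "quarters": 8}                # explicit list (max 50)
asset_scan = {"asset_min": 1_000_000, "asset_max": 10_000_000,  # $1B-$10B, in $thousands
              "min_severity": "critical"}

def validate(args: dict) -> bool:
    """Accept exactly one scan mode: state, certs, or an asset range."""
    modes = [
        "state" in args,
        "certs" in args,
        "asset_min" in args or "asset_max" in args,
    ]
    return sum(modes) == 1 and len(args.get("certs", [])) <= 50
```

A client would pass one of these dicts as the tool-call arguments.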

Parameters (JSON Schema):

  • certs (optional): Specific CERTs to scan (max 50).

  • limit (optional): Max flagged institutions to return.

  • state (optional): Scan all active institutions in this state.

  • repdte (optional): Report Date (YYYYMMDD). Defaults to the most recent quarter.

  • quarters (optional): Prior quarters to fetch for trend analysis (default 4).

  • asset_max (optional): Maximum total assets ($thousands) filter.

  • asset_min (optional): Minimum total assets ($thousands) filter.

  • min_severity (optional): Minimum severity level to include in results. Default: warning.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnly/idempotent), the description adds crucial context: detailed signal codes with severity classifications (Critical/Warning/Info), clarification that output is 'signal-shaped, not assessment-shaped,' and the important disclaimer that this is a 'Public off-site analytical proxy' rather than official supervisory data. This significantly enhances understanding of the tool's behavior and limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with logical flow: purpose → signal taxonomy → scan modes → output explanation → disclaimer. While the enumerated signal codes are lengthy, they are essential reference material. The content earns its place with zero filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately explains the return format ('Per-institution risk signals ranked by severity count') and the ranking mechanism. It covers the analytical methodology and limitations. A perfect score would require specific output field documentation, but the description is comprehensive given that no output schema is published.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by explaining the three scan modes (state vs certs vs asset filters), which clarifies how parameters interact and are mutually exclusive. It also adds context about the 'max 50' constraint for certs and the default report date behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool scans FDIC-insured institutions for early warning risk signals using the specific 'public_camels_proxy_v1' analytical engine. It distinguishes itself from sibling tools by specifying the 'early warning' use case and proxy-based methodology versus comprehensive health analysis or raw data searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly outlines three scan modes (state-wide, explicit list, asset-based) which guide parameter selection. However, it lacks explicit guidance on when to use this versus siblings like fdic_analyze_bank_health, though the disclaimer about 'not official supervisory ratings' provides implicit context about limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_franchise_footprint: Institution Franchise Footprint
Read-only · Idempotent

Analyze the geographic franchise footprint of an FDIC-insured institution using Summary of Deposits (SOD) data.

Shows how an institution's branches and deposits are distributed across metropolitan statistical areas (MSAs), providing a market-by-market breakdown of branch count, deposit totals, and percentage of the institution's total deposits.

Output includes:

  • Total branch count, deposits, and market count

  • Market-by-market breakdown sorted by deposits

  • Structured JSON for programmatic consumption

Branches outside MSAs are grouped under "Non-MSA / Rural".
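
The market-by-market aggregation described above can be sketched as follows. The record fields are illustrative stand-ins, not the actual SOD schema.

```python
from collections import defaultdict

# Group branch records by MSA, sum deposits, and compute each market's
# share of total deposits. Field names here are illustrative only.
branches = [
    {"msa": "Austin-Round Rock, TX", "deposits": 500},
    {"msa": "Austin-Round Rock, TX", "deposits": 300},
    {"msa": None, "deposits": 200},  # rural branch, outside any MSA
]

markets = defaultdict(lambda: {"branches": 0, "deposits": 0})
for b in branches:
    key = b["msa"] or "Non-MSA / Rural"   # edge case noted in the description
    markets[key]["branches"] += 1
    markets[key]["deposits"] += b["deposits"]

total = sum(m["deposits"] for m in markets.values())
footprint = sorted(
    ({"market": k, **v, "pct_of_deposits": 100 * v["deposits"] / total}
     for k, v in markets.items()),
    key=lambda m: m["deposits"], reverse=True,
)
```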

Parameters (JSON Schema):

  • cert (required): FDIC Certificate Number

  • year (optional): SOD report year. Defaults to most recent.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only/idempotent safety, while the description adds crucial behavioral context: data source (SOD), aggregation logic (grouped by MSA, sorted by deposits), edge case handling (Non-MSA/Rural grouping), and output structure (JSON with specific metrics).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded, followed by methodology, output specification (bullet points compensate for missing output schema), and edge case note. Slightly verbose but efficient given the need to document return values without an output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully compensates for absent output schema by detailing return values (branch count, deposit totals, percentages, market count) and format. Combined with complete input schema and annotations, provides sufficient context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both cert and year fully described), establishing baseline 3. Description mentions 'SOD data' which reinforces the year parameter's context, but does not elaborate on parameter semantics, formats, or relationships beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states the tool 'Analyzes the geographic franchise footprint' using 'Summary of Deposits (SOD) data,' specifying the exact resource (branch/deposit distribution across MSAs) and distinguishing it from generic institution searches or financial analysis siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context that this is for geographic/market breakdown analysis, but lacks explicit guidance on when to use this versus siblings like fdic_search_locations or fdic_market_share_analysis, and includes no 'when-not-to-use' exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_get_institution: Get Institution by Certificate Number
Read-only · Idempotent

Retrieve detailed information for a specific FDIC-insured institution using its FDIC Certificate Number (CERT).

Use this when you know the exact CERT number for an institution. To find a CERT number, use fdic_search_institutions first.

Args:

  • cert (number): FDIC Certificate Number (e.g., 3511 for Bank of America)

  • fields (string, optional): Comma-separated list of fields to return

Returns a detailed institution profile suitable for concise summaries, with structured fields available for exact values when needed.

Parameters (JSON Schema):

  • cert (required): FDIC Certificate Number — the unique identifier for an institution

  • fields (optional): Comma-separated list of fields to return
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/destructive traits, so the description focuses on output characteristics: 'detailed institution profile suitable for concise summaries'. No contradictions. It could restate idempotence explicitly for reinforcement, but it adequately supplements the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with logical flow: purpose statement → usage guidance → parameter documentation → return description. No filler sentences. Front-loaded with prerequisite conditions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 2-parameter lookup tool with no output schema, description adequately covers return value intent ('suitable for concise summaries'), prerequisites (CERT lookup), and parameter examples. Matches complexity level appropriately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% providing baseline documentation. Description adds concrete example '3511 for Bank of America' for the cert parameter, helping agents understand the data format. The Args section provides clear parameter mapping.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Retrieve' + specific resource 'FDIC-insured institution' + specific identifier 'FDIC Certificate Number'. It clearly distinguishes from sibling fdic_search_institutions by stating this requires the 'exact CERT number', making the scope precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Use this when you know the exact CERT number' and names specific alternative 'use fdic_search_institutions first'. Clear prerequisite chain guides agent on tool selection workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_get_institution_failure: Get Failure Details by Certificate Number
Read-only · Idempotent

Retrieve failure details for a specific institution by FDIC Certificate Number.

Use this when you know the CERT of a failed institution to get its specific failure record.

Args:

  • cert (number): FDIC Certificate Number of the failed institution

  • fields (string, optional): Comma-separated list of fields to return

Returns detailed failure information suitable for concise summaries, with structured fields available for exact values when needed.

Parameters (JSON Schema):

  • cert (required): FDIC Certificate Number — the unique identifier for an institution

  • fields (optional): Comma-separated list of fields to return
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations disclose read-only, idempotent, non-destructive safety properties. The description adds value by characterizing the return data ('suitable for concise summaries, with structured fields available'), compensating for the missing output schema. It omits rate limits, data freshness, or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-organized with purpose front-loaded, followed by usage guidance, parameters, and return value description. The Args section duplicates information already present in the schema (minor redundancy), but overall every sentence earns its place and the structure aids scannability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 2-parameter schema with full coverage and provided annotations, the description adequately covers the tool's function and output character. It appropriately handles the lack of output schema by describing the return value. Could be improved by noting this covers historical bank failures only.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The Args section in the description essentially mirrors the schema definitions without adding supplemental semantics (e.g., CERT format examples, available field options), which is acceptable given the comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise action ('Retrieve') + resource ('failure details') + identifier ('by FDIC Certificate Number'), clearly distinguishing this getter from the sibling 'fdic_search_failures' tool. The scope is unambiguous and specific to failed institutions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Use this when you know the CERT of a failed institution,' providing clear context for when to select this tool over the search alternative. However, it stops short of explicitly naming the sibling tool (fdic_search_failures) that should be used when the CERT is unknown.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_holding_company_profile: Holding Company Profile
Read-only · Idempotent

Profile a bank holding company by grouping its FDIC-insured subsidiaries and aggregating financial metrics. Look up by holding company name or by any subsidiary's CERT number.

Output includes:

  • Consolidated summary with total assets, deposits, and asset-weighted ROA/equity ratio

  • List of all FDIC-insured subsidiaries with individual metrics

  • Structured JSON for programmatic consumption

NOTE: This is an analytical tool based on public financial data.
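
As a quick illustration of the "asset-weighted ROA" in the consolidated summary (names and figures invented for the sketch):

```python
# Asset-weighted ROA: each subsidiary's ROA weighted by its share of
# combined assets. Names and figures are made up for illustration.
subsidiaries = [
    {"name": "Lead Bank",  "assets": 900, "roa": 1.2},
    {"name": "Thrift Sub", "assets": 100, "roa": 0.4},
]

total_assets = sum(s["assets"] for s in subsidiaries)
weighted_roa = sum(s["roa"] * s["assets"] for s in subsidiaries) / total_assets
# (1.2 * 900 + 0.4 * 100) / 1000 = 1.12
```

The larger subsidiary dominates the consolidated figure, which is the point of asset weighting.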

Parameters (JSON Schema):

  • cert (optional): CERT of any subsidiary — looks up its holding company, then profiles the entire HC.

  • hc_name (optional): Holding company name (e.g., "JPMORGAN CHASE & CO"). Uses NAMEHCR field.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive behavior. The description adds valuable context by confirming this is an 'analytical tool based on public financial data' and detailing the specific aggregation methodology ('asset-weighted ROA/equity ratio') and output structure, which goes beyond the safety hints in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is impeccably structured: a single sentence establishing purpose, followed by focused bullet points detailing the specific output metrics, and a brief note on data provenance. Every sentence earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking a formal output schema, the description compensates effectively by listing the specific aggregated metrics (total assets, deposits, asset-weighted ratios) and subsidiary list format. Combined with comprehensive input schema coverage and complete behavioral annotations, this provides sufficient context for invocation, though it omits error handling or rate limit details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters (including the NAMEHCR field reference and the lookup logic for cert). The description reinforces the lookup options but adds minimal semantic depth beyond what the schema provides, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific action ('Profile') and resource ('bank holding company'), clearly distinguishing it from single-institution tools like fdic_get_institution by emphasizing the aggregation of 'FDIC-insured subsidiaries' and 'financial metrics.' The scope is precisely defined through the consolidation concept.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the two lookup methods (by 'holding company name or by any subsidiary's CERT number'), providing clear parameter-level guidance. However, it lacks explicit guidance on when to select this tool over sibling analysis tools (e.g., fdic_analyze_bank_health) or when a user needs consolidated vs. individual bank views.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_market_share_analysis: Deposit Market Share Analysis
Read-only · Idempotent

Analyze deposit market share and concentration for an MSA or city market using FDIC Summary of Deposits (SOD) data.

Computes market share for all institutions in a geographic market, ranks them by deposits, and calculates the Herfindahl-Hirschman Index (HHI) for market concentration analysis per DOJ/FTC merger guidelines.

Two entry modes:

  • MSA market: provide msa as the numeric MSABR code (e.g., msa: 19100 for Dallas-Fort Worth-Arlington, msa: 42660 for Seattle-Tacoma-Bellevue). Use fdic_search_sod to look up MSABR codes.

  • City market: provide city (branch city name, e.g., "Austin") and state (two-letter code, e.g., "TX").

Output includes:

  • Market overview with total deposits, institution count, and HHI classification

  • Optional highlighted institution showing rank and share (provide cert)

  • Top institutions ranked by deposit market share

  • Structured JSON for programmatic consumption

Requires at least one of: msa (numeric MSABR code), or city + state.
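
The HHI computation itself is simple enough to sketch. Shares are in percent, so the index runs from near 0 to 10,000; the bands below follow the 2010 DOJ/FTC Horizontal Merger Guidelines, and the tool's exact thresholds may differ.

```python
# HHI: sum of squared market shares (in percent). Deposits are
# illustrative; thresholds assume the 2010 merger guidelines.
deposits = {"Bank A": 600, "Bank B": 300, "Bank C": 100}

total = sum(deposits.values())
shares = {k: 100 * v / total for k, v in deposits.items()}
hhi = sum(s ** 2 for s in shares.values())  # 60^2 + 30^2 + 10^2 = 4600

if hhi < 1500:
    band = "unconcentrated"
elif hhi <= 2500:
    band = "moderately concentrated"
else:
    band = "highly concentrated"
```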

Parameters (JSON Schema):

  • msa (optional): FDIC MSABR numeric code for the Metropolitan Statistical Area (e.g., 19100 for Dallas-Fort Worth-Arlington, 42660 for Seattle-Tacoma-Bellevue). Use fdic_search_sod with MSABR to look up codes.

  • cert (optional): Highlight a specific institution in the results.

  • city (optional): City name (e.g., "Austin"). Requires state.

  • year (optional): SOD report year (1994-present). Defaults to most recent.

  • state (optional): Two-letter state abbreviation (e.g., TX). Required when using city filter.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). The description adds valuable behavioral context: data source (FDIC Summary of Deposits), regulatory methodology (DOJ/FTC merger guidelines for HHI), and detailed output structure. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear hierarchy: purpose statement, methodology note, entry modes (bulleted), output specification (bulleted), and requirements. Every sentence provides actionable information. Appropriate length for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive despite no output schema. Description compensates with detailed 'Output includes' section covering market overview, HHI classification, institution ranking, and JSON format. All 5 parameters are contextualized, and the conditional requirement logic (MSA vs City+State) is fully explained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema has 100% coverage (baseline 3), the description adds significant value: concrete examples for MSA codes (19100, 42660), cross-reference to fdic_search_sod for lookups, and explanation of the 'cert' parameter's effect (highlighting specific institutions). Elevates above baseline schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes 'deposit market share and concentration' using FDIC SOD data, with specific computation details (market share ranking, HHI calculation). It distinctly positions this as a geographic market analysis tool, differentiating it from sibling tools focused on individual bank health or credit analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent guidance provided through explicit 'Two entry modes' section explaining MSA vs. City approaches. Critically, it directs users to use sibling tool 'fdic_search_sod' for MSABR code lookups and clearly states the requirement logic ('Requires at least one of: msa... or city + state').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_peer_group_analysis: Peer Group Analysis
Read-only · Idempotent

Build a peer group for an FDIC-insured institution and rank it against peers on financial and efficiency metrics at a single report date.

Three usage modes:

  • Subject-driven: provide cert and repdte — auto-derives peer criteria from the subject's asset size and charter class

  • Explicit criteria: provide repdte plus asset_min/asset_max, charter_classes, state, or raw_filter

  • Subject with overrides: provide cert plus explicit criteria to override auto-derived defaults

Metrics ranked (fixed order):

  • Total Assets, Total Deposits, ROA, ROE, Net Interest Margin

  • Equity Capital Ratio, Efficiency Ratio, Loan-to-Deposit Ratio

  • Deposits-to-Assets Ratio, Non-Interest Income Share

Rankings use competition rank (1, 2, 2, 4). Rank, denominator, and percentile all use the same comparison set: matched peers plus the subject institution.

Output includes:

  • Subject rankings and percentiles (when cert provided)

  • Peer group medians

  • Peer list with CERTs (pass to fdic_compare_bank_snapshots for trend analysis)

  • Metric definitions with directionality metadata

Override precedence: cert derives defaults, then explicit params override them.
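
The "competition rank (1, 2, 2, 4)" convention and the auto-derived asset band can be sketched as follows (subject assets invented for illustration):

```python
# Competition ranking: tied values share a rank; the next distinct value's
# rank skips ahead by the number of ties. Sort direction depends on each
# metric's directionality (higher-is-better shown here).
def competition_rank(values):
    ordered = sorted(values, reverse=True)
    return [ordered.index(v) + 1 for v in values]

roas = [1.5, 1.1, 1.1, 0.9]
ranks = competition_rank(roas)  # [1, 2, 2, 4]

# Auto-derived peer asset band when only cert is supplied (per the
# parameter docs): 50% to 200% of the subject's report-date assets.
subject_assets = 2_500_000  # $thousands, illustrative
asset_min, asset_max = 0.5 * subject_assets, 2.0 * subject_assets
```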

Parameters (JSON Schema):

  • cert (optional): Subject institution CERT number. When provided, auto-derives peer criteria and ranks this bank against peers.

  • limit (optional): Max peer records returned in the response. All matched peers are used for ranking regardless of this limit.

  • state (optional): Two-letter state code (e.g., "NC", "TX").

  • repdte (optional): Report Date (REPDTE) in YYYYMMDD format. FDIC data is published quarterly on: March 31, June 30, September 30, and December 31. Example: 20231231 for Q4 2023. If omitted, defaults to the most recent quarter-end date likely to have published data (~90-day lag).

  • asset_max (optional): Maximum total assets ($thousands) for peer selection. Defaults to 200% of subject's report-date assets when cert is provided.

  • asset_min (optional): Minimum total assets ($thousands) for peer selection. Defaults to 50% of subject's report-date assets when cert is provided.

  • raw_filter (optional): Advanced: raw ElasticSearch query string appended to peer selection criteria with AND.

  • active_only (optional): Limit to institutions where ACTIVE:1 (currently operating, FDIC-insured).

  • extra_fields (optional): Additional FDIC field names to include as raw values in the response. Does not affect peer selection.

  • charter_classes (optional): Charter class codes to include (e.g., ["N", "SM"]). Defaults to the subject's charter class when cert is provided.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the readOnly/idempotent annotations, the description adds crucial behavioral context: the specific ranking methodology ('competition rank (1, 2, 2, 4)'), the comparison set definition ('matched peers plus the subject'), parameter precedence logic ('cert derives defaults, then explicit params override'), and detailed output structure. This explains what the tool does internally and what users receive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear information hierarchy: purpose statement, usage modes (bulleted), ranked metrics (list), output specification, and precedence rules. Every section serves a distinct purpose. Slightly dense but efficient given the tool's 10-parameter complexity and three-modal logic.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a complex analytical tool with no output schema. Documents the 9 specific ranked metrics, output components (percentiles, medians, peer list), and references the sibling tool for extended workflows. Missing only error handling or edge case descriptions (e.g., insufficient peers).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds significant value by grouping parameters into logical usage modes and explaining dynamic defaults (e.g., asset_min/asset_max default to percentages of subject assets when cert is provided) and override precedence—semantic relationships not captured in individual parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening sentence clearly defines the core action ('Build a peer group... and rank it') and scope ('financial and efficiency metrics at a single report date'). It specifically targets FDIC-insured institutions and distinguishes this from sibling tools by focusing on peer derivation and ranking rather than individual bank analysis or time-series comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly documents three distinct usage modes (Subject-driven, Explicit criteria, Subject with overrides) with clear parameter combinations for each. Critically, it identifies the sibling tool fdic_compare_bank_snapshots as the next step for 'trend analysis,' providing clear workflow guidance and alternative selection logic.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_regional_context: Regional Economic Context
Read-only · Idempotent

Overlay macro/regional economic data on a bank's geographic context. Uses FRED (Federal Reserve Economic Data) for state unemployment, national unemployment, and federal funds rate. Provides trend analysis and narrative context for bank performance assessment. Gracefully degrades if FRED API is unavailable.

Output includes:

  • State and national unemployment rates with trend analysis

  • Federal funds rate and rate environment classification

  • Narrative assessment of macro conditions for bank performance

  • Structured JSON for programmatic consumption

NOTE: Requires FRED_API_KEY environment variable for reliable data access. Degrades gracefully without it.

Parameters (JSON Schema)

  • cert (optional): FDIC Certificate Number — auto-detects state from institution record.

  • state (optional): Two-letter state abbreviation (e.g., TX). Alternative to cert-based lookup.

  • repdte (optional): Reference report date (YYYYMMDD). FRED data fetched for 2 years before this date.
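The two-year FRED lookback anchored to repdte can be sketched in a few lines. This is a minimal illustration of the window arithmetic only; the server's actual fetch logic is not published, and `fred_window` is a hypothetical helper name.

```python
from datetime import date

def fred_window(repdte: str) -> tuple[str, str]:
    """Return the (start, end) of the two-year FRED observation
    window ending at a YYYYMMDD report date."""
    end = date(int(repdte[:4]), int(repdte[4:6]), int(repdte[6:8]))
    start = end.replace(year=end.year - 2)  # safe: quarter-ends never fall on Feb 29
    return start.isoformat(), end.isoformat()

print(fred_window("20231231"))  # ('2021-12-31', '2023-12-31')
```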
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses critical behavioral traits beyond annotations: external FRED API dependency, graceful degradation behavior, specific metrics retrieved (state/national unemployment, fed funds rate), and detailed output structure. Deducted one point for not specifying degradation mechanics (e.g., partial data vs. empty response) or cache behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded, followed by data sources, output specification, and prerequisites. The bulleted output list is justified given the absence of an output schema. Slightly verbose but information-dense with no wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent completeness given constraints. Compensates effectively for missing output schema by detailing return structure (JSON fields, narrative assessments). Covers external API prerequisites, degradation behavior, and data provenance. Sufficient for an AI agent to invoke confidently despite external dependencies.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. The description does not augment parameter semantics (e.g., explaining cert/state mutual exclusivity or repdte lookup window semantics), but the schema descriptions are complete enough that no additional compensation is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Overlay') and resources ('macro/regional economic data', 'FRED') to clearly define the tool's scope. It effectively distinguishes itself from siblings by focusing on external macroeconomic indicators (unemployment, federal funds rate) rather than institution-specific micro-analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for usage ('for bank performance assessment') and notes the FRED_API_KEY requirement. Lacks explicit contrast with geographic siblings like 'fdic_franchise_footprint', but the FRED data focus and 'Overlay' verb sufficiently clarify when to select this tool for macro context vs. operational footprint analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_search_demographics: Search Institution Demographics Data (A)
Read-only · Idempotent

Search BankFind demographics data for FDIC-insured institutions.

Returns quarterly demographic and market-structure attributes such as office counts, territory assignments, metro classification, county/country codes, and selected geographic reference data.

Common filter examples:

  • Demographics for a specific bank: CERT:3511

  • By report date: REPDTE:20251231

  • Institutions in metro areas: METRO:1

  • Institutions with out-of-state offices: OFFSTATE:[1 TO *]

  • Minority status date present: MNRTYDTE:[19000101 TO 99991231]

Key returned fields:

  • CERT: FDIC Certificate Number

  • REPDTE: Report Date — the last day of the quarterly reporting period (YYYYMMDD)

  • QTRNO: Quarter number

  • OFFTOT: Total offices

  • OFFSTATE: Offices in other states

  • OFFNDOM: Offices in non-domestic territories

  • OFFOTH: Other offices

  • OFFSOD: Offices included in Summary of Deposits

  • METRO, MICRO: Metro/micro area flags

  • CBSANAME, CSA: Core-based statistical area data

  • FDICTERR, RISKTERR: FDIC and risk territory assignments

  • SIMS_LAT, SIMS_LONG: Geographic coordinates

Args:

  • cert (number, optional): Filter by institution CERT number

  • repdte (string, optional): Report Date in YYYYMMDD format (quarter-end dates: 0331, 0630, 0930, 1231)

  • filters (string, optional): Additional ElasticSearch query filters

  • fields (string, optional): Comma-separated field names

  • limit (number): Records to return (default: 20)

  • offset (number): Pagination offset (default: 0)

  • sort_by (string, optional): Field to sort by

  • sort_order ('ASC'|'DESC'): Sort direction (default: 'ASC')

Prefer concise human-readable summaries or tables when answering users. Structured fields are available for totals, pagination, and demographic records.
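An argument payload for a call like the one above might be assembled as follows. This is a sketch under the documented schema; `demographics_args` is a hypothetical helper, and the FDIC field names follow the ALL_CAPS convention the schema describes.

```python
def demographics_args(cert: int, repdte: str, fields: list[str]) -> dict:
    """Assemble arguments for an fdic_search_demographics call."""
    return {
        "cert": cert,
        "repdte": repdte,            # quarter-end date, YYYYMMDD
        "fields": ",".join(fields),  # comma-separated, per the schema
        "limit": 20,
        "offset": 0,
    }

args = demographics_args(3511, "20251231", ["CERT", "REPDTE", "OFFTOT", "METRO"])
print(args["fields"])  # CERT,REPDTE,OFFTOT,METRO
```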

Parameters (JSON Schema)

  • cert (optional): Filter by FDIC Certificate Number

  • limit (optional): Maximum number of records to return (1-10000, default: 20)

  • fields (optional): Comma-separated list of FDIC field names to return. Leave empty to return all fields. Field names are ALL_CAPS (e.g., NAME, CERT, ASSET, DEP, STALP). Example: NAME,CERT,ASSET,DEP,STALP

  • offset (optional): Number of records to skip for pagination (default: 0)

  • repdte (optional): Filter by Report Date (REPDTE) in YYYYMMDD format. FDIC data is published quarterly on: March 31, June 30, September 30, and December 31. Example: 20251231 for Q4 2025. If omitted, returns all available dates.

  • filters (optional): FDIC API filter using ElasticSearch query string syntax. Combine conditions with AND/OR, use quotes for multi-word values, and [min TO max] for ranges (* = unbounded). Common fields: NAME (institution name), STNAME (state name), STALP (two-letter state code), CERT (certificate number), ASSET (total assets in $thousands), ACTIVE (1=active, 0=inactive). Examples: STNAME:"California", ACTIVE:1 AND ASSET:[1000000 TO *], NAME:"Chase"

  • sort_by (optional): Field name to sort results by. Example: ASSET, NAME, FAILDATE

  • sort_order (optional, default: ASC): Sort direction: ASC (ascending) or DESC (descending)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/idempotent safety (readOnlyHint=true), so description focus shifts to data content. It effectively discloses behavioral traits by detailing 'quarterly demographic and market-structure attributes' and listing 15+ specific returned fields (CERT, OFFTOT, CBSANAME, etc.), providing crucial context about what data structures to expect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with logical sections: purpose statement, filter examples, returned fields documentation, and Args summary. Despite length (8 parameters, no output schema), every section earns its place. The 'Key returned fields' section efficiently compensates for missing output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description comprehensively documents return values by listing and explaining key fields (CERT, REPDTE, OFFTOT, SIMS_LAT, etc.). With 8 optional parameters requiring complex ElasticSearch syntax, the common filter examples provide essential completeness for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100%, the description adds valuable semantic context: it maps quarter-end dates to numeric formats (0331, 0630, 0930, 1231) for repdte, provides concrete ElasticSearch filter syntax examples (e.g., 'MNRTYDTE:[19000101 TO 99991231]'), and clarifies field name conventions (ALL_CAPS), enhancing parameter understanding beyond schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the specific action ('Search'), resource ('BankFind demographics data'), and target ('FDIC-insured institutions'). The term 'demographics' effectively distinguishes this tool from siblings like fdic_search_financials and fdic_search_locations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides 'Common filter examples' section with concrete use cases (e.g., 'CERT:3511', 'METRO:1', 'OFFSTATE:[1 TO *]') that demonstrate how to filter effectively. However, lacks explicit guidance on when NOT to use this tool versus sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_search_failures: Search Bank Failures (A)
Read-only · Idempotent

Search for details on failed FDIC-insured financial institutions.

Returns data on bank failures including failure date, resolution type, estimated cost to the FDIC Deposit Insurance Fund, and acquiring institution info.

Common filter examples:

  • By state: STALP:CA (two-letter state code)

  • By year range: FAILDATE:[2008-01-01 TO 2010-12-31]

  • Recent failures: FAILDATE:[2020-01-01 TO *]

  • By resolution type: RESTYPE:PAYOFF or RESTYPE:"PURCHASE AND ASSUMPTION"

  • Large failures by cost: COST:[100000 TO *] (cost in $thousands)

  • By name: NAME:"Washington Mutual"

Resolution types (RESTYPE):

  • PAYOFF = depositors paid directly, no acquirer

  • PURCHASE AND ASSUMPTION = acquirer buys assets and assumes deposits

  • PAYOUT = variant of payoff with insured-deposit transfer

Key returned fields:

  • CERT: FDIC Certificate Number

  • NAME: Institution name

  • CITY, STALP (two-letter state code), STNAME (full state name): Location

  • FAILDATE: Date of failure (YYYY-MM-DD)

  • SAVR: Savings association flag (SA) or bank (BK)

  • RESTYPE: Resolution type (see above)

  • QBFASSET: Total assets at failure ($thousands)

  • COST: Estimated cost to FDIC Deposit Insurance Fund ($thousands)

Args:

  • filters (string, optional): ElasticSearch query filter

  • fields (string, optional): Comma-separated field names

  • limit (number): Records to return (default: 20)

  • offset (number): Pagination offset (default: 0)

  • sort_by (string, optional): Field to sort by (e.g., FAILDATE, COST)

  • sort_order ('ASC'|'DESC'): Sort direction (default: 'ASC')

Prefer concise human-readable summaries or tables when answering users. Structured fields are available for totals, pagination, and failure records.
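Several of the filter examples above can be combined into one query string. The sketch below composes such a filter; `failures_filter` is a hypothetical helper, and the field names and range syntax follow the ElasticSearch query-string conventions documented for this tool.

```python
def failures_filter(state: str, start: str, end: str, min_cost: int) -> str:
    """Compose an ElasticSearch query-string filter for fdic_search_failures.
    Dates are YYYY-MM-DD; COST is in $thousands."""
    return (
        f"STALP:{state}"
        f" AND FAILDATE:[{start} TO {end}]"
        f" AND COST:[{min_cost} TO *]"
    )

# Crisis-era California failures costing the DIF at least $100M
print(failures_filter("CA", "2008-01-01", "2010-12-31", 100000))
```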

Parameters (JSON Schema)

  • limit (optional): Maximum number of records to return (1-10000, default: 20)

  • fields (optional): Comma-separated list of FDIC field names to return. Leave empty to return all fields. Field names are ALL_CAPS (e.g., NAME, CERT, ASSET, DEP, STALP). Example: NAME,CERT,ASSET,DEP,STALP

  • offset (optional): Number of records to skip for pagination (default: 0)

  • filters (optional): FDIC API filter using ElasticSearch query string syntax. Combine conditions with AND/OR, use quotes for multi-word values, and [min TO max] for ranges (* = unbounded). Common fields: NAME (institution name), STNAME (state name), STALP (two-letter state code), CERT (certificate number), ASSET (total assets in $thousands), ACTIVE (1=active, 0=inactive). Examples: STNAME:"California", ACTIVE:1 AND ASSET:[1000000 TO *], NAME:"Chase"

  • sort_by (optional): Field name to sort results by. Example: ASSET, NAME, FAILDATE

  • sort_order (optional, default: ASC): Sort direction: ASC (ascending) or DESC (descending)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint/idempotentHint, the description adds crucial behavioral context: it discloses the query syntax (ElasticSearch), explains domain-specific codes (RESTYPE values like PAYOFF vs PURCHASE AND ASSUMPTION), clarifies monetary units ($thousands), and documents pagination behavior implicitly through offset/limit explanations. It does not contradict the safety annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is lengthy but well-structured and front-loaded: it states purpose immediately, then provides tiered sections (examples, field definitions, args). Every section serves a purpose—particularly the resolution type glossary and filter examples which are essential for this ElasticSearch-based API. No tautology or obvious padding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent compensation for missing output schema: the description comprehensively lists key returned fields (CERT, NAME, FAILDATE, QBFASSET, etc.), explains their semantics (e.g., SAVR flag, cost units), and documents the data structure (totals, pagination, failure records). Combined with high schema coverage and clear annotations, this is complete for a complex search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds significant value through concrete filter syntax examples (e.g., 'STALP:CA', 'COST:[100000 TO *]'), explains field naming conventions (ALL_CAPS), and provides domain context for parameters like RESTYPE and FAILDATE that pure schema descriptions cannot convey.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a clear, specific statement: 'Search for details on failed FDIC-insured financial institutions.' It distinguishes itself from sibling tools (like fdic_search_institutions or fdic_get_institution) by focusing specifically on 'failures' and 'failed' institutions, and enumerates failure-specific return data (FAILDATE, RESTYPE, COST) that uniquely identifies this tool's domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides extensive practical guidance through 'Common filter examples' (state, date ranges, resolution types) and explicitly instructs the agent to 'Prefer concise human-readable summaries or tables when answering users.' However, it lacks explicit when-not guidance or named sibling alternatives (e.g., 'use fdic_get_institution for active banks instead').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_search_financials: Search Institution Financial Data (A)
Read-only · Idempotent

Search quarterly financial (Call Report) data for FDIC-insured institutions. Covers over 1,100 financial variables reported quarterly.

Returns balance sheet, income statement, capital, and performance ratio data from FDIC Call Reports.

Common filter examples:

  • Financials for a specific bank: CERT:3511

  • By report date: REPDTE:20231231

  • High-profit banks in Q4 2023: REPDTE:20231231 AND ROA:[1.5 TO *]

  • Large banks most recent: ASSET:[10000000 TO *]

  • Negative net income: NETINC:[* TO 0]

Key returned fields:

  • CERT: FDIC Certificate Number

  • REPDTE: Report Date — the last day of the quarterly reporting period (YYYYMMDD)

  • ASSET: Total assets ($thousands)

  • DEP: Total deposits ($thousands)

  • DEPDOM: Domestic deposits ($thousands)

  • INTINC: Total interest income ($thousands)

  • EINTEXP: Total interest expense ($thousands)

  • NETINC: Net income ($thousands)

  • ROA: Return on assets (%)

  • ROE: Return on equity (%)

  • NETNIM: Net interest margin (%)

Args:

  • cert (number, optional): Filter by institution CERT number

  • repdte (string, optional): Report Date in YYYYMMDD format (quarter-end dates: 0331, 0630, 0930, 1231)

  • filters (string, optional): Additional ElasticSearch query filters

  • fields (string, optional): Comma-separated field names (the full set has 1,100+ fields)

  • limit (number): Records to return (default: 20)

  • offset (number): Pagination offset (default: 0)

  • sort_by (string, optional): Field to sort by

  • sort_order ('ASC'|'DESC'): Sort direction (default: 'DESC' recommended for most recent first)

Prefer concise human-readable summaries or tables when answering users. Structured fields are available for totals, pagination, and quarterly financial records.
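A screen like the "high-profit banks" example above can be parameterized as a single filter string. This is a sketch only; `financials_filter` is a hypothetical helper, with the field names and units taken from the documentation above.

```python
def financials_filter(repdte: str, min_roa: float, min_asset: int) -> str:
    """Filter for fdic_search_financials: banks above an ROA (%) and
    asset ($thousands) floor at a given quarter-end (YYYYMMDD)."""
    return (
        f"REPDTE:{repdte}"
        f" AND ROA:[{min_roa} TO *]"
        f" AND ASSET:[{min_asset} TO *]"
    )

# Profitable $10B+ banks in Q4 2023
print(financials_filter("20231231", 1.5, 10000000))
```

Pairing this with sort_by="ROA" and sort_order="DESC" would surface the most profitable institutions first.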

Parameters (JSON Schema)

  • cert (optional): Filter by FDIC Certificate Number to get financials for a specific institution

  • limit (optional): Maximum number of records to return (1-10000, default: 20)

  • fields (optional): Comma-separated list of FDIC field names to return. Leave empty to return all fields. Field names are ALL_CAPS (e.g., NAME, CERT, ASSET, DEP, STALP). Example: NAME,CERT,ASSET,DEP,STALP

  • offset (optional): Number of records to skip for pagination (default: 0)

  • repdte (optional): Filter by Report Date (REPDTE) in YYYYMMDD format. FDIC data is published quarterly on call report dates: March 31, June 30, September 30, and December 31. Example: 20231231 for Q4 2023. If omitted, returns all available dates (sorted most recent first by default).

  • filters (optional): FDIC API filter using ElasticSearch query string syntax. Combine conditions with AND/OR, use quotes for multi-word values, and [min TO max] for ranges (* = unbounded). Common fields: NAME (institution name), STNAME (state name), STALP (two-letter state code), CERT (certificate number), ASSET (total assets in $thousands), ACTIVE (1=active, 0=inactive). Examples: STNAME:"California", ACTIVE:1 AND ASSET:[1000000 TO *], NAME:"Chase"

  • sort_by (optional): Field name to sort results by. Example: ASSET, NAME, FAILDATE

  • sort_order (optional, default: DESC): Sort direction: DESC (descending, default for most recent first) or ASC (ascending)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). The description adds valuable behavioral context beyond annotations: it discloses the return structure (balance sheet, income statement, capital ratios), lists specific key fields (CERT, REPDTE, ASSET, etc.), notes the data volume (1,100+ fields), and explains default sorting behavior ('DESC recommended for most recent first').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear visual separation: purpose statement, return value description, filter examples, field glossary, and Args reference. While lengthy, every section serves a distinct purpose—the examples demonstrate query syntax and the field list compensates for the missing output schema. Information is front-loaded with the core purpose stated immediately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately compensates by enumerating 'Key returned fields' with dollar-unit clarifications ($thousands) and descriptions. It explains the data domain (Call Reports), frequency (quarterly), and scale (1,100+ variables), providing sufficient context for an agent to understand what data structures will be returned from the 8-parameter query surface.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is met. The description adds significant value by providing concrete syntax examples in the Args section (e.g., 'quarter-end dates: 0331, 0630, 0930, 1231' for repdte, ElasticSearch syntax hints for filters, and the recommendation to use 'DESC' for sort_order). These examples and recommendations go beyond the schema's type declarations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool searches 'quarterly financial (Call Report) data for FDIC-insured institutions' with specific scope covering 'over 1,100 financial variables.' It clearly distinguishes itself from sibling analysis tools (fdic_analyze_*) by focusing on raw data retrieval vs. computed analytics, and from other search tools (fdic_search_institutions, fdic_search_failures) by specifying Call Report financial content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description provides excellent 'Common filter examples' showing practical usage patterns (e.g., 'CERT:3511', 'ROA:[1.5 TO *]'), it lacks explicit guidance on when to use this raw data tool versus the specialized analysis siblings (fdic_analyze_bank_health, fdic_compare_peer_health, etc.). The agent must infer that this is for raw financial data while siblings provide processed analytics.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_search_history: Search Institution History / Structure Changes (A)
Read-only · Idempotent

Search for structural change events for FDIC-insured financial institutions.

Returns records on mergers, acquisitions, name changes, charter conversions, failures, and other significant structural events.

Common filter examples:

  • History for a specific bank: CERT:3511

  • Mergers: TYPE:merger

  • Failures: TYPE:failure

  • Name changes: CHANGECODE:CO

  • By date range: PROCDATE:[2008-01-01 TO 2009-12-31]

  • By state: PSTALP:CA (two-letter state code)

Event types (TYPE):

  • merger = institution was merged into another

  • failure = institution failed

  • assistance = received FDIC assistance transaction

  • insurance = insurance-related event (new coverage, termination)

Common change codes (CHANGECODE):

  • CO = name change

  • CR = charter conversion

  • DC = deposit assumption change

  • MA = merger/acquisition (absorbed by another institution)

  • NI = new institution insured

  • TC = trust company conversion

Key returned fields:

  • CERT: FDIC Certificate Number

  • INSTNAME: Institution name

  • CLASS: Charter class at time of change

  • PCITY, PSTALP: Location (city, two-letter state code)

  • PROCDATE: Processing date of the change (YYYY-MM-DD)

  • EFFDATE: Effective date of the change (YYYY-MM-DD)

  • ENDEFYMD: End effective date

  • PCERT: Predecessor/successor CERT (for mergers)

  • TYPE: Type of structural change (see above)

  • CHANGECODE: Code for type of change (see above)

  • CHANGECODE_DESC: Human-readable description of the change code

  • INSDATE: Insurance date

Args:

  • cert (number, optional): Filter by institution CERT number

  • filters (string, optional): ElasticSearch query filters

  • fields (string, optional): Comma-separated field names

  • limit (number): Records to return (default: 20)

  • offset (number): Pagination offset (default: 0)

  • sort_by (string, optional): Field to sort by (e.g., PROCDATE)

  • sort_order ('ASC'|'DESC'): Sort direction (default: 'ASC')

Prefer concise human-readable summaries or tables when answering users. Structured fields are available for totals, pagination, and event records.
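The CERT + TYPE pattern from the filter examples above can be sketched as a small composer; `history_filter` is a hypothetical helper using the documented field names.

```python
def history_filter(cert: int, event_type: str) -> str:
    """Filter for fdic_search_history: structural events of one type
    for a single institution (event_type per the TYPE glossary)."""
    return f"CERT:{cert} AND TYPE:{event_type}"

# Pair with sort_by="PROCDATE", sort_order="DESC" for most recent first.
print(history_filter(3511, "merger"))  # CERT:3511 AND TYPE:merger
```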

Parameters (JSON Schema)

  • cert (optional): Filter by FDIC Certificate Number to get history for a specific institution

  • limit (optional): Maximum number of records to return (1-10000, default: 20)

  • fields (optional): Comma-separated list of FDIC field names to return. Leave empty to return all fields. Field names are ALL_CAPS (e.g., NAME, CERT, ASSET, DEP, STALP). Example: NAME,CERT,ASSET,DEP,STALP

  • offset (optional): Number of records to skip for pagination (default: 0)

  • filters (optional): FDIC API filter using ElasticSearch query string syntax. Combine conditions with AND/OR, use quotes for multi-word values, and [min TO max] for ranges (* = unbounded). Common fields: NAME (institution name), STNAME (state name), STALP (two-letter state code), CERT (certificate number), ASSET (total assets in $thousands), ACTIVE (1=active, 0=inactive). Examples: STNAME:"California", ACTIVE:1 AND ASSET:[1000000 TO *], NAME:"Chase"

  • sort_by (optional): Field name to sort results by. Example: ASSET, NAME, FAILDATE

  • sort_order (optional, default: ASC): Sort direction: ASC (ascending) or DESC (descending)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent status. The description adds significant behavioral context: the ElasticSearch query syntax, enumerated event types (merger, assistance, insurance), change code taxonomy (CO, CR, MA, etc.), and a detailed list of returned fields with their semantics (PROCDATE vs EFFDATE, PCERT for mergers).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear visual separation between examples, event types, change codes, and returned fields. Front-loaded with the core purpose. Minor deduction for the 'Args:' section which redundantly replicates fully-documented schema parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description comprehensively documents 'Key returned fields' with 12+ field explanations, data types (YYYY-MM-DD dates), and query syntax patterns. Adequately covers the complexity of ElasticSearch filtering and pagination for a 7-parameter search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. The 'Args:' section largely restates schema information without adding semantic depth beyond what the schema already provides (e.g., it lists sort_order options but schema already shows enum values).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb+resource+scope: 'Search for structural change events for FDIC-insured financial institutions.' It clearly distinguishes from siblings like fdic_search_institutions (current state) and fdic_search_financials (financial metrics) by focusing specifically on historical structural changes (mergers, name changes, failures).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides extensive 'Common filter examples' with actual syntax (CERT:3511, TYPE:merger, PROCDATE ranges) that implicitly guide when to use specific filters. However, it does not explicitly name alternatives like fdic_search_failures or fdic_get_institution for when historical event data is not needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_search_institutions: Search FDIC Institutions (A)
Read-only · Idempotent

Search for FDIC-insured financial institutions (banks and savings institutions) using flexible filters.

Returns institution profile data including name, location, charter class, asset size, deposit totals, profitability metrics, and regulatory status.

Common filter examples:

  • By state: STNAME:"California"

  • Active banks only: ACTIVE:1

  • Large banks: ASSET:[10000000 TO *] (assets in $thousands)

  • By bank class: BKCLASS:N (national bank), BKCLASS:SM (state member bank), BKCLASS:NM (state non-member)

  • By name: NAME:"Wells Fargo"

  • Commercial banks: CB:1

  • Savings institutions: MUTUAL:1

  • Recently established: ESTYMD:[2010-01-01 TO *]

Charter class codes (BKCLASS):

  • N = National commercial bank (OCC-supervised)

  • SM = State-chartered, Federal Reserve member

  • NM = State-chartered, non-member (FDIC-supervised)

  • SB = Federal savings bank (OCC-supervised)

  • SA = State savings association

  • OI = Insured branch of foreign bank

Key returned fields:

  • CERT: FDIC Certificate Number (unique ID)

  • NAME: Institution name

  • CITY, STALP (two-letter state code), STNAME (full state name): Location

  • ASSET: Total assets ($thousands)

  • DEP: Total deposits ($thousands)

  • BKCLASS: Charter class code (see above)

  • ACTIVE: 1 if currently active, 0 if inactive

  • ROA, ROE: Profitability ratios

  • OFFICES: Number of branch offices

  • ESTYMD: Establishment date (YYYY-MM-DD)

  • REGAGNT: Primary federal regulator (OCC, FRS, FDIC)

Args:

  • filters (string, optional): ElasticSearch query filter

  • fields (string, optional): Comma-separated field names

  • limit (number): Records to return, 1-10000 (default: 20)

  • offset (number): Pagination offset (default: 0)

  • sort_by (string, optional): Field to sort by

  • sort_order ('ASC'|'DESC'): Sort direction (default: 'ASC')

Prefer concise human-readable summaries or tables when answering users. Structured fields are available for totals, pagination, and institution records.

Parameters (JSON Schema)

Name | Required | Description | Default
cert is absent for this tool; all parameters below are optional.
limit | No | Maximum number of records to return (1-10000, default: 20)
fields | No | Comma-separated list of FDIC field names to return. Leave empty to return all fields. Field names are ALL_CAPS (e.g., NAME, CERT, ASSET, DEP, STALP). Example: NAME,CERT,ASSET,DEP,STALP
offset | No | Number of records to skip for pagination (default: 0)
filters | No | FDIC API filter using ElasticSearch query string syntax. Combine conditions with AND/OR, use quotes for multi-word values, and [min TO max] for ranges (* = unbounded). Common fields: NAME (institution name), STNAME (state name), STALP (two-letter state code), CERT (certificate number), ASSET (total assets in $thousands), ACTIVE (1=active, 0=inactive). Examples: STNAME:"California", ACTIVE:1 AND ASSET:[1000000 TO *], NAME:"Chase"
sort_by | No | Field name to sort results by. Example: ASSET, NAME, FAILDATE
sort_order | No | Sort direction: ASC (ascending) or DESC (descending) | ASC
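As a concrete illustration of the filter syntax documented above, here is a minimal Python sketch assembling arguments for an fdic_search_institutions call. The BankFind URL at the end is an assumption about the public FDIC API this server presumably wraps, shown only for orientation; the MCP server handles the actual request.

```python
from urllib.parse import urlencode

# Arguments for an fdic_search_institutions call: active California banks
# with at least $10B in assets (ASSET is reported in $thousands),
# sorted largest first.
args = {
    "filters": 'STNAME:"California" AND ACTIVE:1 AND ASSET:[10000000 TO *]',
    "fields": "NAME,CERT,CITY,ASSET,ROA",
    "limit": 10,
    "sort_by": "ASSET",
    "sort_order": "DESC",
}

# The same parameters map onto the public BankFind endpoint
# (assumed base URL shown for orientation only).
url = "https://banks.data.fdic.gov/api/institutions?" + urlencode(args)
print(url)
```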
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent status. The description adds valuable behavioral context: ElasticSearch query syntax requirements, field naming conventions (ALL_CAPS), asset values denominated in $thousands, and the structure of returned data (pagination, totals, records). It also clarifies charter class codes and regulatory meanings.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While well-structured with clear sections (purpose, examples, return fields, args), the description is verbose. The 'Key returned fields' and 'Charter class codes' sections function as reference documentation. Given the lack of output schema, this information is necessary, but the 'Args' section duplicates schema descriptions already present at 100% coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent completeness given no output schema exists. Documents return fields comprehensively (CERT, ASSET, DEP, etc.), explains the query DSL syntax essential for the filters parameter, and specifies data formats (dates, caps, units). Covers pagination behavior adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds significant value through concrete filter examples demonstrating ElasticSearch syntax (ranges, boolean operators, wildcards) and clarifying the $thousands unit for ASSET fields—critical semantic information not fully conveyed by the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search') and resource ('FDIC-insured financial institutions'), and clarifies it uses 'flexible filters.' This distinguishes it from siblings like fdic_get_institution (direct lookup) and fdic_analyze_* (calculated metrics).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides extensive filter examples that implicitly guide usage (e.g., 'By state', 'Active banks only'), but lacks explicit guidance on when to use fdic_get_institution vs this search tool, or when to prefer fdic_search_summary for aggregated data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_search_locations: Search Institution Locations / Branches
Read-only, Idempotent

Search for branch locations of FDIC-insured financial institutions.

Returns branch/office data including address, city, state, coordinates, branch type, and establishment date.

Common filter examples:

  • All branches of a bank: CERT:3511

  • By state: STALP:TX (two-letter state code)

  • By city: CITY:"Austin"

  • Main offices only: BRNUM:0

  • By county: COUNTY:"Travis"

  • Active branches only: ENDEFYMD:[9999-01-01 TO *] (sentinel date 9999-12-31 means still open)

  • By metro area (CBSA): CBSA_METRO_NAME:"New York-Newark-Jersey City"

Branch service types (BRSERTYP):

  • 11 = Full service brick and mortar

  • 12 = Full service retail

  • 21 = Limited service administrative

  • 22 = Limited service military

  • 23 = Limited service drive-through

  • 24 = Limited service loan production

  • 25 = Limited service consumer/trust

  • 26 = Limited service Internet/mobile

  • 29 = Limited service other

Key returned fields:

  • CERT: FDIC Certificate Number

  • UNINAME: Institution name

  • NAMEFULL: Full branch name

  • ADDRESS, CITY, STALP (two-letter state code), ZIP: Branch address

  • COUNTY: County name

  • BRNUM: Branch number (0 = main office)

  • BRSERTYP: Branch service type code (see above)

  • LATITUDE, LONGITUDE: Geographic coordinates

  • ESTYMD: Branch established date (YYYY-MM-DD)

  • ENDEFYMD: Branch end date (9999-12-31 if still active)

Args:

  • cert (number, optional): Filter by institution CERT number

  • filters (string, optional): Additional ElasticSearch query filters

  • fields (string, optional): Comma-separated field names

  • limit (number): Records to return (default: 20)

  • offset (number): Pagination offset (default: 0)

  • sort_by (string, optional): Field to sort by

  • sort_order ('ASC'|'DESC'): Sort direction (default: 'ASC')

Prefer concise human-readable summaries or tables when answering users. Structured fields are available for totals, pagination, and branch location records.

Parameters (JSON Schema)

Name | Required | Description | Default
cert | No | Filter by FDIC Certificate Number to get all branches of a specific institution
limit | No | Maximum number of records to return (1-10000, default: 20)
fields | No | Comma-separated list of FDIC field names to return. Leave empty to return all fields. Field names are ALL_CAPS (e.g., NAME, CERT, ASSET, DEP, STALP). Example: NAME,CERT,ASSET,DEP,STALP
offset | No | Number of records to skip for pagination (default: 0)
filters | No | FDIC API filter using ElasticSearch query string syntax. Combine conditions with AND/OR, use quotes for multi-word values, and [min TO max] for ranges (* = unbounded). Common fields: NAME (institution name), STNAME (state name), STALP (two-letter state code), CERT (certificate number), ASSET (total assets in $thousands), ACTIVE (1=active, 0=inactive). Examples: STNAME:"California", ACTIVE:1 AND ASSET:[1000000 TO *], NAME:"Chase"
sort_by | No | Field name to sort results by. Example: ASSET, NAME, FAILDATE
sort_order | No | Sort direction: ASC (ascending) or DESC (descending) | ASC
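The sentinel-date convention above can trip up post-processing, so here is a small sketch: the arguments query still-open Texas branches of CERT 3511, and is_active is a hypothetical helper (not part of this server) for interpreting ENDEFYMD on returned records.

```python
# Arguments for fdic_search_locations: still-open Texas branches of
# CERT 3511. ENDEFYMD is 9999-12-31 while a branch remains open, so a
# range starting at 9999-01-01 matches only active offices.
args = {
    "cert": 3511,
    "filters": "STALP:TX AND ENDEFYMD:[9999-01-01 TO *]",
    "fields": "NAMEFULL,ADDRESS,CITY,STALP,BRNUM,BRSERTYP,ENDEFYMD",
    "limit": 100,
}

def is_active(record: dict) -> bool:
    """A returned branch is open when its end date is the 9999 sentinel."""
    return record.get("ENDEFYMD", "").startswith("9999")

print(is_active({"ENDEFYMD": "9999-12-31"}))  # True
print(is_active({"ENDEFYMD": "2015-06-30"}))  # False
```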
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Substantially expands on annotations (which only indicate read-only/idempotent status) by documenting: sentinel date patterns (9999-12-31 = active), branch service type code mappings (11-29), field semantics (BRNUM:0 = main office), and output preferences ('concise human-readable summaries'). Also clarifies pagination behavior despite no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: purpose, filter examples, service type codes, return fields, and Args mapping. Every section earns its place given the cryptic FDIC field names (STALP, BRSERTYP). Minor deduction for length—while justified by complexity, some redundancy exists between Args section and schema descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent compensation for missing output schema. Exhaustively documents return fields including data types (coordinates, dates), field meanings (UNINAME vs NAMEFULL), and value interpretations (sentinel dates, type codes). Also notes pagination controls (totals, offset) and response format preferences.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value through concrete filter syntax examples (e.g., 'ENDEFYMD:[9999-01-01 TO *]') that demonstrate how to construct complex ElasticSearch queries for the filters parameter, complementing the schema's generic syntax description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific action 'Search for branch locations' and clearly identifies the resource as 'FDIC-insured financial institutions.' This distinguishes it from siblings like fdic_search_institutions (institution-level data) and fdic_franchise_footprint (analytical aggregation), establishing a clear scope focused on physical branch enumeration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides extensive concrete filter examples (CERT:3511, STALP:TX, BRNUM:0) demonstrating ElasticSearch query syntax and common use cases. However, it lacks explicit sibling differentiation, such as stating when to prefer this over fdic_search_institutions or fdic_franchise_footprint for geographic analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_search_sod: Search Summary of Deposits (SOD)
Read-only, Idempotent

Search annual Summary of Deposits (SOD) data for individual bank branches.

The SOD report provides annual deposit data at the branch level, showing deposit balances for each office of every FDIC-insured institution as of June 30 each year.

Common filter examples:

  • All branches for a bank: CERT:3511

  • SOD for specific year: YEAR:2022

  • Branches in a state: STALPBR:CA

  • Branches in a city: CITYBR:"Austin"

  • High-deposit branches: DEPSUMBR:[1000000 TO *]

  • By metro area (MSA code): MSABR:19100

Key returned fields:

  • YEAR: Report year (as of June 30)

  • CERT: FDIC Certificate Number

  • BRNUM: Branch number (0 = main office)

  • NAMEFULL: Branch or institution name

  • ADDRESBR, CITYBR, STALPBR, ZIPBR: Branch address

  • DEPSUMBR: Total deposits at branch ($thousands)

  • MSABR: Metropolitan Statistical Area code (numeric; 0 = non-MSA)

  • LATITUDE, LONGITUDE: Coordinates

Args:

  • cert (number, optional): Filter by institution CERT number

  • year (number, optional): SOD report year (1994-present)

  • filters (string, optional): Additional ElasticSearch query filters

  • fields (string, optional): Comma-separated field names

  • limit (number): Records to return (default: 20)

  • offset (number): Pagination offset (default: 0)

  • sort_by (string, optional): Field to sort by (e.g., DEPSUMBR, YEAR)

  • sort_order ('ASC'|'DESC'): Sort direction (default: 'ASC')

Prefer concise human-readable summaries or tables when answering users. Structured fields are available for totals, pagination, and deposit records.

Parameters (JSON Schema)

Name | Required | Description | Default
cert | No | Filter by FDIC Certificate Number
year | No | Filter by specific year (1994-present). SOD data is annual.
limit | No | Maximum number of records to return (1-10000, default: 20)
fields | No | Comma-separated list of FDIC field names to return. Leave empty to return all fields. Field names are ALL_CAPS (e.g., NAME, CERT, ASSET, DEP, STALP). Example: NAME,CERT,ASSET,DEP,STALP
offset | No | Number of records to skip for pagination (default: 0)
filters | No | FDIC API filter using ElasticSearch query string syntax. Combine conditions with AND/OR, use quotes for multi-word values, and [min TO max] for ranges (* = unbounded). Common fields: NAME (institution name), STNAME (state name), STALP (two-letter state code), CERT (certificate number), ASSET (total assets in $thousands), ACTIVE (1=active, 0=inactive). Examples: STNAME:"California", ACTIVE:1 AND ASSET:[1000000 TO *], NAME:"Chase"
sort_by | No | Field name to sort results by. Example: ASSET, NAME, FAILDATE
sort_order | No | Sort direction: ASC (ascending) or DESC (descending) | ASC
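Since DEPSUMBR is reported per branch in $thousands, a common follow-up is aggregating deposits by geography. A sketch using made-up records shaped like the fields documented above:

```python
from collections import defaultdict

# Hypothetical SOD records for one institution (DEPSUMBR in $thousands).
rows = [
    {"STALPBR": "TX", "CITYBR": "Austin", "DEPSUMBR": 500_000},
    {"STALPBR": "TX", "CITYBR": "Dallas", "DEPSUMBR": 250_000},
    {"STALPBR": "OK", "CITYBR": "Tulsa", "DEPSUMBR": 100_000},
]

# Sum branch-level deposits per state.
by_state = defaultdict(int)
for r in rows:
    by_state[r["STALPBR"]] += r["DEPSUMBR"]

# Convert $thousands to whole dollars for display.
print({st: dep * 1000 for st, dep in by_state.items()})
```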
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent status, freeing the description to focus on data semantics: it explains the June 30 reporting date, deposit units ($thousands), and MSA coding scheme (0=non-MSA). It also discloses key return fields, partially compensating for the missing output schema. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear visual hierarchy: purpose → data definition → filter examples → return fields → parameters → output preference. The length is justified by the need to document ElasticSearch query syntax and return fields, though the Args section partially duplicates schema content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately compensates by listing key returned fields and their meanings. It covers temporal scope (1994-present), pagination (limit/offset), and complex filter syntax. Missing only error-handling or rate-limiting details for a perfect 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all 8 parameters have detailed descriptions in the JSON schema). The Args section in the description provides condensed labels but adds minimal semantic depth beyond the schema (e.g., doesn't add format examples for 'fields' or explain ElasticSearch syntax beyond the schema). Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search') and resource ('Summary of Deposits (SOD) data'), immediately clarifying scope ('individual bank branches'). It distinguishes from sibling institution-level tools (e.g., fdic_search_institutions) by emphasizing 'branch level' data and the annual June 30 reporting period.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides extensive 'Common filter examples' that implicitly demonstrate when to use the tool (e.g., filtering by CERT, YEAR, geography). However, it lacks explicit guidance contrasting this tool with siblings like fdic_search_institutions or fdic_search_locations, and contains no 'when-not-to-use' warnings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_search_summary: Search Annual Financial Summary Data
Read-only, Idempotent

Search aggregate financial and structure summary data subtotaled by year for FDIC-insured institutions.

Returns annual snapshots of key financial metrics — useful for tracking an institution's growth over time.

Common filter examples:

  • Annual history for a bank: CERT:3511

  • Specific year: YEAR:2022

  • Year range: YEAR:[2010 TO 2020]

  • Large banks in 2022: YEAR:2022 AND ASSET:[10000000 TO *]

  • Profitable in 2023: YEAR:2023 AND ROE:[10 TO *]

Key returned fields:

  • CERT: FDIC Certificate Number

  • YEAR: Report year

  • ASSET: Total assets ($thousands)

  • DEP: Total deposits ($thousands)

  • NETINC: Net income ($thousands)

  • ROA: Return on assets (%)

  • ROE: Return on equity (%)

  • OFFICES: Number of branch offices

  • REPDTE: Report Date — the last day of the reporting period (YYYYMMDD)

Args:

  • cert (number, optional): Filter by institution CERT number

  • year (number, optional): Filter by specific year (1934-present)

  • filters (string, optional): Additional ElasticSearch query filters

  • fields (string, optional): Comma-separated field names

  • limit (number): Records to return (default: 20)

  • offset (number): Pagination offset (default: 0)

  • sort_by (string, optional): Field to sort by (e.g., YEAR, ASSET)

  • sort_order ('ASC'|'DESC'): Sort direction (default: 'ASC')

Prefer concise human-readable summaries or tables when answering users. Structured fields are available for totals, pagination, and annual summary records.

Parameters (JSON Schema)

Name | Required | Description | Default
cert | No | Filter by FDIC Certificate Number
year | No | Filter by specific year (e.g., 2022)
limit | No | Maximum number of records to return (1-10000, default: 20)
fields | No | Comma-separated list of FDIC field names to return. Leave empty to return all fields. Field names are ALL_CAPS (e.g., NAME, CERT, ASSET, DEP, STALP). Example: NAME,CERT,ASSET,DEP,STALP
offset | No | Number of records to skip for pagination (default: 0)
filters | No | FDIC API filter using ElasticSearch query string syntax. Combine conditions with AND/OR, use quotes for multi-word values, and [min TO max] for ranges (* = unbounded). Common fields: NAME (institution name), STNAME (state name), STALP (two-letter state code), CERT (certificate number), ASSET (total assets in $thousands), ACTIVE (1=active, 0=inactive). Examples: STNAME:"California", ACTIVE:1 AND ASSET:[1000000 TO *], NAME:"Chase"
sort_by | No | Field name to sort results by. Example: ASSET, NAME, FAILDATE
sort_order | No | Sort direction: ASC (ascending) or DESC (descending) | ASC
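Because each record is an annual snapshot, the data suits simple growth calculations. A sketch using invented records shaped like the fields above:

```python
# Hypothetical annual summary records for one CERT, sorted by YEAR
# (ASSET in $thousands). Compute year-over-year asset growth rates.
records = [
    {"YEAR": 2020, "ASSET": 1_000_000},
    {"YEAR": 2021, "ASSET": 1_100_000},
    {"YEAR": 2022, "ASSET": 1_210_000},
]

growth = {}
for prev, cur in zip(records, records[1:]):
    pct = (cur["ASSET"] - prev["ASSET"]) / prev["ASSET"] * 100
    growth[cur["YEAR"]] = round(pct, 1)

print(growth)  # {2021: 10.0, 2022: 10.0}
```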
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnly, idempotent, non-destructive), the description adds crucial behavioral context: monetary fields are in '$thousands', REPDTE format is 'YYYYMMDD', and it defines the report date as 'the last day of the reporting period'. It also clarifies the temporal nature of the aggregation (annual snapshots).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The structure is logical (purpose → examples → return fields → parameters), but the 'Args' section is redundant given the 100% schema coverage. While the 'Key returned fields' section compensates for the missing output schema, the description is longer than necessary due to parameter duplication.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description comprehensively documents return values (8 key fields with definitions and units) and explains pagination controls. It adequately covers the complex ElasticSearch filter syntax through examples, making the tool invocable despite the lack of structured output definition.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds significant value through the 'Common filter examples' section, which demonstrates ElasticSearch query syntax (e.g., YEAR:[2010 TO 2020], ASSET:[10000000 TO *]) that is critical for constructing valid `filters` parameter values. The 'Args' section largely duplicates the schema and adds minimal value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines the tool's scope with specific verbs ('Search') and resources ('aggregate financial and structure summary data'). It effectively distinguishes this tool from siblings like `fdic_search_financials` by emphasizing 'subtotaled by year' and 'annual snapshots', though it lacks explicit contrast statements like 'unlike X, use this for Y'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides strong implied usage guidance through 'Common filter examples' and notes it's 'useful for tracking an institution's growth over time'. However, it lacks explicit guidelines on when NOT to use the tool or direct comparisons to alternatives (e.g., when to use `fdic_search_financials` instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fdic_ubpr_analysis: UBPR-Equivalent Ratio Analysis
Read-only, Idempotent

Compute UBPR-equivalent ratio analysis for an FDIC-insured institution. Includes summary ratios (ROA, ROE, NIM, efficiency), loan mix, capital adequacy, liquidity metrics, and year-over-year growth rates. Ratios are computed from Call Report data and are UBPR-equivalent, not official FFIEC UBPR output.

Output includes:

  • Summary ratios: ROA, ROE, NIM, efficiency ratio, pretax ROA

  • Loan mix: real estate, commercial, consumer, agricultural shares

  • Capital adequacy: Tier 1 leverage, Tier 1 risk-based, equity ratio

  • Liquidity: loan-to-deposit, core deposit ratio, brokered deposits, cash ratio

  • Year-over-year growth: assets, loans, deposits

  • Structured JSON for programmatic consumption

NOTE: This is an analytical tool based on public financial data.

Parameters (JSON Schema)

Name | Required | Description | Default
cert | Yes | FDIC Certificate Number
repdte | No | Report date (YYYYMMDD). Defaults to most recent quarter.
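To make the ratio categories concrete, here is a sketch of how a few of these metrics fall out of Call Report style inputs. The field names (NETINC, ASSET, EQ, LNLSNET, DEP) follow common FDIC conventions but are assumptions here, and the values are invented:

```python
# Hypothetical Call Report inputs, all in $thousands.
bank = {
    "NETINC": 12_000,    # net income
    "ASSET": 1_000_000,  # total assets
    "EQ": 100_000,       # total equity capital
    "LNLSNET": 650_000,  # net loans and leases
    "DEP": 800_000,      # total deposits
}

# ROA and ROE express net income as a percentage of assets and equity;
# loan-to-deposit is a basic liquidity gauge.
ratios = {
    "ROA": round(bank["NETINC"] / bank["ASSET"] * 100, 2),
    "ROE": round(bank["NETINC"] / bank["EQ"] * 100, 2),
    "loan_to_deposit": round(bank["LNLSNET"] / bank["DEP"] * 100, 2),
}
print(ratios)  # {'ROA': 1.2, 'ROE': 12.0, 'loan_to_deposit': 81.25}
```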
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable context beyond annotations: clarifies data provenance ('computed from Call Report data'), distinguishes from authoritative sources ('UBPR-equivalent, not official'), and describes output structure ('Structured JSON for programmatic consumption'). Annotations already confirm read-only/idempotent safety, so description appropriately focuses on data lineage and output format rather than repeating safety properties.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with the purpose front-loaded in the first sentence, followed by categorized output lists. Length is appropriate for the complexity: the detailed ratio enumeration is necessary given that no output schema exists. Minor deduction for the slightly redundant 'NOTE' sentence, which repeats the 'analytical tool' framing already implied by the content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent compensation for missing output schema: exhaustively documents return values across six categories (summary ratios, loan mix, capital adequacy, liquidity, growth rates, JSON structure). Combined with annotations covering behavioral hints, provides complete picture of what the tool returns and how it behaves. No gaps remain for agent invocation decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for both 'cert' (FDIC Certificate Number) and 'repdte' (report date with format YYYYMMDD). Description mentions 'FDIC-insured institution' which loosely maps to cert parameter, but adds no explicit parameter guidance. With full schema coverage, baseline score applies; description neither adds nor detracts from parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb 'Compute' and clear resource 'UBPR-equivalent ratio analysis for an FDIC-insured institution.' Distinguishes scope from siblings by specifying exact ratio categories (ROA, ROE, NIM, efficiency) and noting it covers 'summary ratios, loan mix, capital adequacy, liquidity metrics' unlike sibling tools focused on credit concentration or funding profiles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through its detailed enumeration of output ratios, allowing an agent to infer that this tool is appropriate when UBPR-style metrics are needed. However, it lacks explicit when-to-use guidance versus alternatives like fdic_analyze_bank_health or fdic_peer_group_analysis, and only implicitly notes limitations ('not official FFIEC UBPR output') without stating when to prefer official sources.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
