FDIC BankFind MCP Server

by jflamb

Compare Peer Health (CAMELS Rankings)

fdic_compare_peer_health
Read-only · Idempotent

Compare CAMELS-like health scores across FDIC-insured institutions by explicit CERT list, state, or asset range. Optionally highlight a subject bank's position in the peer ranking. Provides proxy scores and percentiles for peer analysis.

Instructions

Compare CAMELS-style health scores across a group of FDIC-insured institutions.

Three usage modes:

  • Explicit list: provide certs (up to 50) for a specific comparison set

  • State-wide scan: provide state to compare all active institutions in that state

  • Asset-based: provide asset_min/asset_max to compare institutions by size

Optionally provide cert to highlight a subject institution's position in the ranking.
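The three modes above could be invoked with argument payloads like the following. This is a hedged sketch: field names come from the input schema, but the CERT values and the `detect_mode` helper are illustrative assumptions, and the actual call mechanism depends on your MCP client.

```python
# Hypothetical argument payloads for each usage mode of
# fdic_compare_peer_health (field names taken from the input schema).

# Mode 1: explicit CERT list, highlighting one subject institution.
explicit_args = {"certs": [3511, 628, 6548], "cert": 3511}

# Mode 2: state-wide scan of all active institutions in Wyoming.
state_args = {"state": "WY", "sort_by": "composite", "limit": 25}

# Mode 3: asset-based peers between $100M and $1B (values in $thousands).
asset_args = {"asset_min": 100_000, "asset_max": 1_000_000}


def detect_mode(args: dict) -> str:
    """Classify which usage mode a payload selects (assumed precedence)."""
    if args.get("certs"):
        return "explicit"
    if args.get("state"):
        return "state"
    if "asset_min" in args or "asset_max" in args:
        return "asset"
    raise ValueError("no peer-selection criteria supplied")
```

The precedence order in `detect_mode` (explicit list, then state, then asset band) is an assumption; the server's behavior when several criteria are combined is not documented here.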

Output: structuredContent includes {model, official_status, report_date, institutions, metrics, peer_context, proxy_summary, proxy, deprecations}. Institutions include proxy scores and name_source. When a subject cert is provided, metrics[] is the preferred subject-vs-peer array for new UI bindings, and proxy_summary is a flattened subject proxy. peer_context.subject_percentiles is deprecated; it remains for backward compatibility and will be removed only in a future coordinated major release. Auto-peer selection derives asset bands from report-date financials and broadens the cohort if fewer than 10 peers match.
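The percentile and cohort-broadening behavior described above can be sketched as follows. This is an illustrative assumption, not the server's published algorithm: the scoring inputs, the broadening factor, and both function names are hypothetical.

```python
MIN_PEERS = 10  # per the description: broaden if fewer than 10 peers match


def percentile_rank(subject: float, peers: list[float]) -> float:
    """Percent of peer scores at or below the subject's score."""
    if not peers:
        return 0.0
    at_or_below = sum(1 for p in peers if p <= subject)
    return round(100.0 * at_or_below / len(peers), 1)


def broaden_band(lo: float, hi: float, factor: float = 0.5) -> tuple[float, float]:
    """Widen an asset band when too few peers match (assumed policy:
    extend each side by half the original span, floored at zero)."""
    span = hi - lo
    return max(0.0, lo - span * factor), hi + span * factor
```

A subject scoring 3.0 against peers [1.0, 2.0, 3.0, 4.0] would rank at the 75th percentile under this definition; ties counted "at or below" is one of several common conventions.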

NOTE: Public off-site analytical proxy — not official supervisory ratings.

Input Schema

Name       Required  Description                                                            Default
cert       No        Subject institution CERT to highlight in the ranking.
certs      No        Explicit list of CERTs to compare (max 50).
state      No        Two-letter state code to select all active institutions (e.g., "WY").
asset_min  No        Minimum total assets ($thousands) for peer selection.
asset_max  No        Maximum total assets ($thousands) for peer selection.
repdte     No        Report Date (YYYYMMDD). Defaults to the most recent quarter.
sort_by    No        Sort results by composite or a specific CAMELS component rating.       composite
limit      No        Max institutions to return in the response.
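The constraints in the table above lend themselves to client-side pre-checks before a call. A minimal sketch, assuming the documented limits (max 50 CERTs, two-letter state code, YYYYMMDD report date); the `validate_args` helper is hypothetical, not part of the server:

```python
import re


def validate_args(args: dict) -> list[str]:
    """Return a list of constraint violations (empty when the payload
    satisfies the documented input-schema limits)."""
    errors = []
    certs = args.get("certs")
    if certs is not None and len(certs) > 50:
        errors.append("certs: at most 50 entries")
    state = args.get("state")
    if state is not None and not re.fullmatch(r"[A-Z]{2}", state):
        errors.append('state: two-letter code, e.g. "WY"')
    repdte = args.get("repdte")
    if repdte is not None and not re.fullmatch(r"\d{8}", str(repdte)):
        errors.append("repdte: YYYYMMDD")
    return errors
```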

Output Schema

Name                Required
model               Yes
official_status     Yes
proxy               No
proxy_summary       Yes
report_date         Yes
sort_by             Yes
total_institutions  Yes
returned_count      Yes
subject_cert        Yes
subject_rank        Yes
metrics             Yes
institutions        Yes
deprecations        Yes
peer_context        Yes
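Given the fields above and the deprecation note in the description, a consumer might read the subject-vs-peer data defensively, preferring metrics[] over the deprecated peer_context.subject_percentiles. A hedged sketch against an assumed response shape; `subject_summary` is a hypothetical helper:

```python
def subject_summary(result: dict) -> dict:
    """Extract subject-vs-peer rows from structuredContent, preferring
    the new metrics[] array over the deprecated
    peer_context.subject_percentiles fallback."""
    metrics = result.get("metrics") or []
    if metrics:
        return {"source": "metrics", "rows": metrics}
    legacy = (result.get("peer_context") or {}).get("subject_percentiles")
    return {"source": "deprecated", "rows": legacy or []}
```

Binding new UI to metrics[] while tolerating the legacy field keeps clients working across the announced coordinated major release.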
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate a safe, read-only, idempotent operation. The description adds critical context: it's a public off-site analytical proxy, not official supervisory ratings. It also details output structure and deprecation warnings.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is informative but quite lengthy, with detailed output and deprecation notes. While well-structured, it could be more concise without losing essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (8 optional parameters, multiple modes, detailed output schema), the description covers all key aspects: usage, output fields, peer selection logic, and disclaimers. It is complete for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with each parameter described. The description adds value by explaining how parameters relate to the three usage modes and the optional subject cert, enriching the context beyond individual parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool compares CAMELS-style health scores across FDIC-insured institutions, with three specific usage modes. This distinguishes it from sibling tools like fdic_peer_group_analysis or fdic_compare_bank_snapshots.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Three explicit usage modes (explicit list, state-wide, asset-based) provide clear guidance on when to use each. While it doesn't directly mention alternatives, the modes effectively partition use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jflamb/fdic-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.