American Default Research
Server Details
Read-only MCP server for U.S. household distress: 96 indicators, ADI composite, 3,144 counties.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
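Because the transport is Streamable HTTP, any MCP client SDK can connect. Below is a minimal connection sketch assuming the official MCP Python SDK (the `mcp` package); the endpoint URL is a placeholder, since the listing does not display it.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint; substitute the actual server or gateway URL (not shown in this listing).
SERVER_URL = "https://example.com/mcp"


async def main() -> None:
    # streamablehttp_client yields read/write streams plus a session-id callback.
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", (tool.description or "").split(".")[0])


if __name__ == "__main__":
    asyncio.run(main())
```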
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4.5/5, with 5 of 5 tools scored.
Each tool targets a distinct aspect: composite ADI, county-level data, cross-correlations, individual indicator snapshot, and indicator search. There is no overlap in functionality.
All tools follow a consistent verb_noun snake_case pattern: get_adi_composite, get_county_scorecard, get_cross_correlations, get_indicator, search_indicators. The pattern is uniform and predictable.
With 5 tools, the server is well-scoped for a research data API. Each tool serves a clear and necessary function without redundancy or clutter.
The tool surface covers core operations: retrieving composite, county, cross-correlation, and individual indicator data, plus search. A minor gap is the absence of a tool for raw historical series, but that data is available via an external URL, and the search tool covers indicator discovery.
Available Tools
5 tools

get_adi_composite
Fetch the latest quarterly reading of the American Distress Index (ADI) composite. Returns composite score (0-100), zone, composite Z-score, and the 5-component breakdown (Buffer, Debt Stress, Financial Conditions, Cost Pressure, Labor) with Z-scores and point contributions. Updated quarterly.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
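For illustration, a hedged sketch of invoking this tool: it takes no parameters, so the call passes an empty arguments object. The `session` is assumed to be an initialized `ClientSession` as in the connection sketch above, and the JSON-text response shape is an assumption, not something the server documents.

```python
import json

from mcp import ClientSession


async def fetch_adi_composite(session: ClientSession) -> dict:
    # No parameters: pass an empty arguments object.
    result = await session.call_tool("get_adi_composite", {})
    # Assumes the composite (score, zone, Z-score, 5-component breakdown)
    # arrives as a JSON text content block.
    return json.loads(result.content[0].text)
```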
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of disclosure. It informs the agent that the tool returns a composite score (0-100), zone, Z-score, and a 5-component breakdown with Z-scores and contributions, and that it is updated quarterly. This is thorough for a read-only fetch; it does not explicitly address side effects or additional behaviors, though none are expected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with two sentences that front-load the main purpose and then provide specific details about the returned data structure. Every sentence adds value without redundancy or unnecessary length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, no output schema), the description is complete. It details what is returned (composite score, zone, Z-score, and five components) and the update frequency (quarterly). There is no missing information needed for an agent to correctly invoke and interpret the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and the input schema is empty with 100% coverage. The description does not need to add parameter information because there is none. Under the rubric, a zero-parameter tool warrants a baseline score of 4: the description has no schema details to repeat and implicitly confirms that no parameters are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch' and the specific resource 'the latest quarterly reading of the American Distress Index (ADI) composite'. It lists exactly what is returned, distinguishing it from sibling tools that focus on county scorecards, cross correlations, individual indicators, or searching. This is a specific verb+resource combination with no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description makes the purpose clear and implies when to use this tool (when the latest ADI composite is needed). However, it does not explicitly exclude scenarios or mention alternative tools for similar needs. Given that the tool is a simple fetch with no parameters, the context is straightforward, but explicit guidelines are absent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_county_scorecard
Fetch a county's County Distress Index (CDI) scorecard by 5-digit FIPS code. Returns composite score (0-100), zone (Healthy / Normal / Elevated / Serious / Crisis), national + state rank, 5-domain breakdown (Consumer Credit Distress, Housing Cost Burden, Structural Poverty, Economic Vitality, Legal Distress), key findings, and pre-baked APA / MLA / Chicago / news-copy citations. Accepts 4-digit FIPS with implicit leading zero. 3,144 counties available.
| Name | Required | Description | Default |
|---|---|---|---|
| fips | Yes | | |
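A hedged sketch of a call, assuming a connected `ClientSession` as in the connection example above. The FIPS value for Los Angeles County (06037) is used purely as an example, and the JSON-text response shape is an assumption; `zfill` normalizes a 4-digit code to the 5-digit form the description mentions.

```python
import json

from mcp import ClientSession


async def fetch_county_scorecard(session: ClientSession, fips: str) -> dict:
    # Normalize a 4-digit FIPS (implicit leading zero) to the 5-digit form.
    result = await session.call_tool("get_county_scorecard", {"fips": fips.zfill(5)})
    # Assumes the scorecard arrives as a JSON text content block.
    return json.loads(result.content[0].text)


# Example: await fetch_county_scorecard(session, "6037")  # Los Angeles County, CA
```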
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It comprehensively lists the output fields (composite score, zone, ranks, 5 domains, findings, citations), which tells the agent what to expect. It does not explicitly state read-only behavior, but fetching is inherently non-destructive. The input flexibility is also disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences), front-loaded with the core purpose, and each sentence adds value: purpose, detailed output, input nuance. No redundant or vague phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no annotations and no output schema, the description covers all necessary information: what the tool does, how to use it (FIPS code format), what it returns (detailed list), and general scope. It is complete enough for an agent to correctly invoke and interpret the result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage for the single parameter 'fips'. The description adds crucial meaning: it specifies the expected format (5-digit FIPS, accepts 4-digit with leading zero) and notes the scope (3,144 counties). This compensates well for the lack of schema description, though validation or sourcing info is absent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch', the resource 'county's County Distress Index (CDI) scorecard', and the input method 'by 5-digit FIPS code'. It also lists specific return fields, distinguishing it from sibling tools like 'get_adi_composite' or 'get_indicator' which likely focus on different data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates usage for retrieving a full CDI scorecard for a county, and notes the input format flexibility (4-digit FIPS with implicit leading zero). However, it does not explicitly state when to use this tool over siblings or mention any prerequisites or exclusions, leaving room for improvement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_cross_correlations
Fetch statistically-validated leading/lagging relationships for an indicator. Source: the five-filter leading-indicator scanner (cross-correlation → first-differenced CCF → multi-crisis validation → Granger causality → out-of-sample validation). Returns two lists: as_leader (pairs where this indicator precedes its follower) and as_follower (pairs where another indicator precedes this one). Only fully-validated pairs are included — partial matches are not surfaced. Most of the 96 indicators return empty lists; currently six pairs clear the full gauntlet.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | | |
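A hedged sketch, again assuming a connected `ClientSession`. The `as_leader` and `as_follower` field names come from the description, while the JSON-text response wrapping is an assumption. Since most indicators return empty lists, the empty case is handled explicitly.

```python
import json

from mcp import ClientSession


async def print_cross_correlations(session: ClientSession, slug: str) -> None:
    result = await session.call_tool("get_cross_correlations", {"slug": slug})
    data = json.loads(result.content[0].text)  # assumed JSON text content
    leaders = data.get("as_leader", [])
    followers = data.get("as_follower", [])
    if not leaders and not followers:
        print(f"{slug}: no fully-validated pairs")  # the common case
        return
    for pair in leaders:
        print(f"{slug} leads:", pair)
    for pair in followers:
        print(f"{slug} follows:", pair)
```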
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and excels: it details the five-filter validation process, discloses that only fully-validated pairs are returned, notes that most indicators return empty lists, and describes the two output lists. No contradictions or omissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each serving a clear purpose: purpose, output structure, and caveat. No redundant phrases, front-loaded with the main action, and efficiently packed with behavioral details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and moderate complexity, the description fully covers what the tool does, how results are structured, the validation rigor, and the expected sparsity of results. No missing context for an agent to decide whether to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'slug' has 0% schema description coverage. The description only implies it identifies an indicator ('for an indicator') but does not explicitly define its meaning or format. It adds minimal value beyond the schema's basic type 'string'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches leading/lagging relationships for an indicator, using a specific verb 'fetch' and resource 'indicator'. It distinguishes itself from sibling tools (get_adi_composite, get_county_scorecard, etc.) by its unique focus on cross-correlations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the validation pipeline and output format but does not provide explicit guidance on when to use this tool instead of siblings. It implies usage for temporal relationship analysis but lacks when-not-to-use or alternative tool recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_indicator
Fetch a compact snapshot of an American Default economic indicator by slug. Returns latest value, unit, frequency, direction, pre-computed aggregates (period averages, extremes, sustained runs), editorial prose (when available), and canonical APA / MLA / Chicago / news-copy citations. Raw historical series is NOT included — use https://americandefault.org/api/indicators/{slug}.json for the full data. Slug examples: 'the-buffer' (personal savings rate), 'mortgage-delinquency', 'initial-unemployment-claims-sa'.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | | |
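A hedged sketch that pairs the tool call with the external URL the description points to for raw history; the slug "mortgage-delinquency" is one of the examples given above, a connected `ClientSession` is assumed, and the JSON-text response shape is an assumption.

```python
import json
import urllib.request

from mcp import ClientSession


async def fetch_indicator_with_history(session: ClientSession, slug: str) -> tuple[dict, dict]:
    # Compact snapshot via the tool (latest value, aggregates, prose, citations).
    result = await session.call_tool("get_indicator", {"slug": slug})
    snapshot = json.loads(result.content[0].text)  # assumed JSON text content

    # Raw historical series is NOT in the tool result; fetch it from the
    # documented endpoint instead (plain blocking request for brevity).
    url = f"https://americandefault.org/api/indicators/{slug}.json"
    with urllib.request.urlopen(url) as response:
        history = json.load(response)

    return snapshot, history


# Example: await fetch_indicator_with_history(session, "mortgage-delinquency")
```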
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the burden. It discloses that raw historical series is NOT included, which is a key behavioral trait. It also mentions the return elements (latest value, unit, frequency, etc.). No mention of authentication or rate limits, but for a read-only fetch this is acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured paragraph that begins with the main purpose, then details return contents, explicitly states what is not included, and ends with examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema, the description thoroughly explains the return structure (latest value, unit, frequency, direction, aggregates, prose, citations) and clarifies the exclusion of raw historical data. For a one-parameter tool, this is remarkably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter 'slug' with 0% coverage. The description adds meaning by providing slug examples ('the-buffer', 'mortgage-delinquency') and explaining that slugs identify indicators, thus helping the agent construct valid values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Fetch'), identifies the resource ('compact snapshot of an American Default economic indicator by slug'), and lists what is returned. It includes slug examples and distinguishes from the full raw data API, clearly differentiating from siblings like 'get_adi_composite'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool (for a compact snapshot with pre-computed aggregates and citations) and when not (for raw historical series, use the direct URL). It does not directly compare to sibling tools, but the context of sibling names implies different purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_indicators
Search the 96-indicator registry by keyword. Returns ranked matches (up to limit, default 10, max 50) with slug, branded name, underlying name, category, and canonical URL. Scoring is substring+prefix over slug, branded_name, name, and category — e.g. query 'savings' returns both The Buffer (personal saving rate) and The Safety Net (emergency savings survey). Use this when you want to discover which slug corresponds to a concept before calling get_indicator.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 10 |
| query | Yes | | |
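A hedged sketch of the workflow the description itself suggests: search for a concept, take the top-ranked slug, then call get_indicator. The assumption that matches arrive as a JSON list of objects with a "slug" field follows from the listed return fields but is not guaranteed; a connected `ClientSession` is assumed as above.

```python
import json

from mcp import ClientSession


async def lookup_indicator(session: ClientSession, concept: str) -> dict | None:
    search_result = await session.call_tool(
        "search_indicators", {"query": concept, "limit": 5}
    )
    matches = json.loads(search_result.content[0].text)  # assumed JSON list
    if not matches:
        return None
    top_slug = matches[0]["slug"]  # assumed field name, per the description
    result = await session.call_tool("get_indicator", {"slug": top_slug})
    return json.loads(result.content[0].text)


# Example: await lookup_indicator(session, "savings")
```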
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description fully covers behavior: scoring method (substring+prefix), limit behavior (default and max), return fields, and an example query.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three focused sentences with no wasted words; front-loaded with action and immediately followed by key behavioral and usage details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema and low schema coverage, the description provides all needed context: return structure, scoring logic, limit constraints, and an illustrative example.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description adds critical meaning: it explains that `query` is a keyword and that `limit` has a default and maximum, going well beyond the schema's basic type and default.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as searching a registry by keyword and distinguishes it from the sibling tool `get_indicator` by stating it helps discover which slug to use before calling the latter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool ('discover which slug corresponds to a concept before calling get_indicator'), but does not provide when-not-to-use guidance or alternatives beyond that sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.