Census ACS Demographics
Server Details
Population, income, poverty, education, housing, and commuting from the US Census ACS
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
5 tools

get_commuting_data
Get means of transportation to work data for counties.
Returns worker counts and percentages for: drove alone, carpooled,
public transit, walked, bicycle, taxi/motorcycle/other, and worked from home.
Args:
state: Two-letter state abbreviation (e.g. 'WA', 'CA') or 2-digit FIPS code.
county_fips: Three-digit county FIPS code (e.g. '033' for King County).
Omit to get all counties in the state.
year: ACS 5-year estimate year (default 2022).

| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
| state | Yes | | |
| county_fips | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
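A tool like this presumably wraps the public Census ACS 5-year estimates API. The sketch below shows how such a request URL could be built on the client side; the base URL and `get`/`for`/`in` parameters follow the documented Census Data API conventions, but the specific B08301 (means of transportation to work) variable codes are assumptions, not something the listing states.

```python
def build_acs_url(year, state_fips, county_fips=None,
                  variables=("NAME", "B08301_001E", "B08301_003E")):
    """Construct a Census ACS 5-year estimates query URL.

    Passing county_fips=None mirrors the tool's documented behavior of
    returning every county in the state (the API wildcard '*').
    """
    base = f"https://api.census.gov/data/{year}/acs/acs5"
    county = county_fips if county_fips else "*"
    return (f"{base}?get={','.join(variables)}"
            f"&for=county:{county}&in=state:{state_fips}")

# King County, WA: state FIPS 53, county FIPS 033
url = build_acs_url(2022, "53", "033")
```

Omitting the county argument (`build_acs_url(2022, "53")`) yields a `for=county:*` query, which is the statewide batch-retrieval case the description mentions.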
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses the data source (ACS 5-year estimates), the default year (2022), and the batch-retrieval behavior when county_fips is omitted; there are no annotations for it to contradict.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear separation between purpose statement and Args section; every sentence provides specific value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given the tool's narrow scope and existing output schema; covers the ACS data source and all parameter behaviors sufficiently.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Essential compensation for 0% schema description coverage by providing detailed parameter semantics, examples ('WA', '033'), and default behaviors for all three arguments.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves 'means of transportation to work data' with specific categories listed; implicitly distinguishes from demographic/economic siblings by focusing on commuting modalities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides operational guidance on omitting county_fips to retrieve statewide data, but lacks explicit guidance on when to choose this tool over sibling tools like get_county_demographics.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_county_demographics
Get demographic data for counties: population, median age, race, Hispanic origin, income, and poverty.
Returns one record per county with total population, median age, racial breakdown
(White, Black, American Indian, Asian, Pacific Islander, Other, Two+),
Hispanic/Latino percentage, median household income, and poverty rate.
Args:
state: Two-letter state abbreviation (e.g. 'WA', 'CA') or 2-digit FIPS code.
county_fips: Three-digit county FIPS code (e.g. '033' for King County).
Omit to get all counties in the state.
year: ACS 5-year estimate year (default 2022). Data covers year-4 through year.

| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
| state | Yes | | |
| county_fips | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
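From an MCP client's perspective, invoking this tool reduces to a JSON-RPC `tools/call` request. The envelope below follows the MCP specification; the argument values are illustrative, and the comments restate the constraints from the tool's own Args section.

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_county_demographics",
        "arguments": {
            "state": "WA",         # two-letter abbreviation or 2-digit FIPS
            "county_fips": "033",  # omit to get all counties in the state
            "year": 2022,          # ACS 5-year estimate year (the default)
        },
    },
}
payload = json.dumps(request)
```

Dropping the `county_fips` key from `arguments` is the statewide batch case described above.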
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses ACS 5-year data source, temporal coverage ('year-4 through year'), and return structure ('one record per county') beyond what annotations provide (none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose, clear Args section, and no redundant text; every sentence provides necessary information not found in structured fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive coverage of inputs and outputs given the simple parameter structure; appropriately handles the lack of schema descriptions and presence of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage; description fully compensates with detailed semantics, examples (e.g., 'WA', '033'), and behavior notes for all three parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') + resource ('demographic data for counties') and explicitly lists demographic fields to distinguish from siblings (economics, education, commuting).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies differentiation through return value descriptions and notes that county_fips can be omitted for bulk retrieval, but lacks explicit 'when to use vs alternatives' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_county_economics
Get economic data for counties: income, poverty, home values, rent, and health insurance.
Returns median household income, poverty rate, median home value, median gross rent,
and health insurance coverage rates (insured vs uninsured).
Args:
state: Two-letter state abbreviation (e.g. 'WA', 'CA') or 2-digit FIPS code.
county_fips: Three-digit county FIPS code (e.g. '033' for King County).
Omit to get all counties in the state.
year: ACS 5-year estimate year (default 2022).

| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
| state | Yes | | |
| county_fips | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
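All five tools accept either a two-letter state abbreviation or a 2-digit FIPS code for `state`. A client-side normalizer for that dual format might look like the following; the abbreviation-to-FIPS table is truncated to four entries for illustration (a real one covers all states and territories).

```python
# Partial abbreviation -> FIPS table; the full table has 50+ entries.
STATE_FIPS = {"WA": "53", "CA": "06", "NY": "36", "TX": "48"}

def normalize_state(state: str) -> str:
    """Accept 'WA'-style abbreviations or already-numeric FIPS codes."""
    s = state.strip().upper()
    if s.isdigit():
        return s.zfill(2)  # pad a bare '6' to the 2-digit form '06'
    try:
        return STATE_FIPS[s]
    except KeyError:
        raise ValueError(f"unknown state abbreviation: {state!r}")
```

Either `normalize_state("WA")` or `normalize_state("53")` resolves to the same `"53"` the Census API expects.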
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations to contradict; description carries burden well by explaining batch behavior (omit county_fips to get all counties) and identifying data source as ACS 5-year estimates.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear hierarchy (summary → returns → args), front-loaded purpose, and no redundancy despite inclusion of return values (which is acceptable given output schema exists).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a 3-parameter tool; covers required parameter constraints and optional parameter defaults, though could briefly note Census Bureau as the ACS data source.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage; description fully compensates with formats (Two-letter/FIPS), examples ('033' for King County), and default behaviors for all three parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb+resource ('Get economic data') and lists distinct data types (income, poverty, home values) that clearly differentiate from sibling tools covering demographics, education, and commuting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through specificity of economic metrics but lacks explicit guidance on when to choose this over get_county_demographics or get_county_education.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_county_education
Get educational attainment for counties (population 25+).
Returns counts and percentages for: less than high school, high school diploma/GED,
some college/associate degree, bachelor's degree, and graduate/professional degree.
Args:
state: Two-letter state abbreviation (e.g. 'WA', 'CA') or 2-digit FIPS code.
county_fips: Three-digit county FIPS code (e.g. '033' for King County).
Omit to get all counties in the state.
year: ACS 5-year estimate year (default 2022).

| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
| state | Yes | | |
| county_fips | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
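The education tool returns both counts and percentages; the percentages are simply each attainment level's share of the 25+ population. A sketch of that derivation (the field names here are illustrative, not the tool's actual output schema):

```python
def attainment_percentages(counts: dict) -> dict:
    """Convert attainment counts to percentages of the 25+ total."""
    total = sum(counts.values())
    if total == 0:
        return {k: 0.0 for k in counts}
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

shares = attainment_percentages({
    "less_than_hs": 100, "hs_diploma": 300,
    "some_college": 300, "bachelors": 200, "graduate": 100,
})
# With a 25+ population of 1000, bachelors comes out at 20.0 percent.
```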
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key behaviors: ACS 5-year estimate source, population 25+ filter, and batch retrieval when county_fips is omitted (defaults to null).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient structure with purpose front-loaded, followed by return value summary and Args section; no redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete given output schema exists; summarizes return categories without duplicating schema structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Comprehensive compensation for 0% schema description coverage by providing formats, examples (WA, 033), and default behavior for all three parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific purpose (educational attainment for counties, population 25+) but lacks explicit differentiation from sibling demographic tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through data specificity and notes omitting county_fips returns all counties, but lacks explicit when-to-use vs alternatives guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_tract_data
Get tract-level ACS data for any variables within a county.
This is a flexible tool for querying any ACS 5-year estimate variables at
the census tract level. Automatically batches requests if more than 50
variables are requested.
Common variable examples:
- B01001_001E: Total population
- B19013_001E: Median household income
- B17001_002E: Population below poverty level
- B25077_001E: Median home value
- B02001_002E-008E: Race breakdown
Args:
state: Two-letter state abbreviation (e.g. 'WA') or 2-digit FIPS code.
county_fips: Three-digit county FIPS code (e.g. '033' for King County, WA).
variables: Comma-separated ACS variable codes (e.g. 'B01001_001E,B19013_001E').
NAME is always included automatically.
year: ACS 5-year estimate year (default 2022).

| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
| state | Yes | | |
| variables | Yes | | |
| county_fips | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
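The description promises automatic batching when more than 50 variables are requested, and that NAME is always included. A sketch of that client-visible behavior, assuming each batch reserves one of its 50 slots for the auto-included NAME column (the exact split the server uses is an assumption):

```python
def batch_variables(variables: str, batch_size: int = 50):
    """Split a comma-separated variable list into API-sized batches,
    prepending NAME to each batch as the tool does automatically."""
    codes = [v.strip() for v in variables.split(",") if v.strip()]
    # Reserve one slot per request for the auto-included NAME column.
    per_batch = batch_size - 1
    return [["NAME"] + codes[i:i + per_batch]
            for i in range(0, len(codes), per_batch)]

batches = batch_variables("B01001_001E,B19013_001E")
# -> [["NAME", "B01001_001E", "B19013_001E"]]: two codes fit in one batch
```

A request for, say, 60 variable codes would fan out into two batches, each at or under the 50-variable ceiling.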
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses important behavioral traits not in schema: automatic batching for >50 variables and that NAME field is always included automatically.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose, bulleted examples, and clear Args section; slightly verbose with 'flexible tool' fluff but every section adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive given input complexity; correctly omits output details since output schema exists, covers all parameters thoroughly, and includes helpful variable examples.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage; Args section provides detailed formats, examples (WA, 033), and constraints for every parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves tract-level ACS data with specific verb and resource, implicitly distinguishing from county-level siblings via geographic specificity, though lacks explicit sibling comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context through geographic level specification (tract vs county), but lacks explicit when-to-use/when-not-to-use guidance relative to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.