
Census County Business Patterns

Server Details

Establishment counts, employment, and payroll by geography and NAICS code from Census CBP

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

3 tools
compare_county_industries

Get a breakdown of all industries in a county.

Returns business pattern data for all 2-digit NAICS sectors in a
specific county. Useful for understanding the economic composition
of a community for grant narratives.

Args:
    state: Two-letter state abbreviation (e.g. 'CA', 'TX', 'NY').
    county_fips: 3-digit county FIPS code (e.g. '037' for Los Angeles County).
    year: Data year (default 2021). Available: 2012-2021.
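To make the Args concrete, here is a hedged sketch of how a tool like this might assemble its query against the public Census CBP API. The endpoint and variable names (`ESTAB`, `EMP`, `PAYANN`, `NAICS2017`) follow documented Census API conventions, but the state-abbreviation-to-FIPS map and the function itself are illustrative assumptions, not the server's actual implementation.

```python
# Hypothetical query builder for a compare_county_industries-style tool.
# No network call is made; this only assembles the URL and parameters.

STATE_FIPS = {"CA": "06", "TX": "48", "NY": "36"}  # truncated map, for illustration only

def build_cbp_query(state: str, county_fips: str, year: int = 2021) -> dict:
    """Build a Census CBP request covering all NAICS sectors in one county."""
    if not 2012 <= year <= 2021:
        raise ValueError("year must be between 2012 and 2021")
    return {
        "url": f"https://api.census.gov/data/{year}/cbp",
        "params": {
            # establishments, employees, annual payroll, plus sector labels
            "get": "NAICS2017_LABEL,ESTAB,EMP,PAYANN",
            "for": f"county:{county_fips}",
            "in": f"state:{STATE_FIPS[state]}",
            "NAICS2017": "*",  # all codes; caller keeps the 2-digit sectors
        },
    }

q = build_cbp_query("CA", "037")  # Los Angeles County, default year 2021
print(q["params"]["in"])  # state:06
```

The two-letter abbreviation must be translated to a FIPS code because the Census API addresses geographies numerically, which is presumably why the tool accepts the friendlier abbreviation form.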
Parameters (JSON Schema)

Name          Required   Default
year          No         2021
state         Yes
county_fips   Yes

Output Schema (JSON Schema)

Name      Required
result    Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully documents temporal coverage (2012-2021) and data scope (2-digit NAICS sectors), but omits critical behavioral details such as data source/freshness, rate limits, or authentication requirements that would help an agent understand operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a front-loaded purpose statement, followed by behavioral details, use-case guidance, and a clearly delineated Args section. No sentences are wasted; every line provides actionable information not redundant with the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately avoids duplicating return value documentation. It adequately covers all three input parameters and data scope. Minor gap remains in clarifying the comparative aspect implied by the tool name 'compare_county_industries' relative to its sibling 'get_county_business_patterns.'

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage (titles only), but the Args section fully compensates by providing format specifications and examples for all three parameters: state ('Two-letter... e.g. CA'), county_fips ('3-digit... e.g. 037'), and year ('default 2021. Available: 2012-2021').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Get[s] a breakdown of all industries' and specifies '2-digit NAICS sectors,' providing a specific verb and resource. However, it does not explicitly differentiate from the similarly-named sibling 'get_county_business_patterns,' leaving ambiguity about why this tool is named 'compare' versus 'get.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly identifies the use case as 'understanding the economic composition of a community for grant narratives,' providing clear positive guidance. However, it lacks 'when-not-to-use' guidance or explicit mention of sibling alternatives (e.g., when to use 'get_county_business_patterns' instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_county_business_patterns

Get business establishment, employment, and payroll data by county.

Returns County Business Patterns data from the Census Bureau showing
the number of business establishments, employees, and annual payroll
for a given geography and optional industry filter.

Args:
    state: Two-letter state abbreviation (e.g. 'CA', 'TX', 'NY').
    county_fips: 3-digit county FIPS code (e.g. '037' for Los Angeles).
        If omitted, returns data for all counties in the state.
    naics_code: NAICS 2017 industry code to filter by (e.g. '72' for
        Accommodation/Food, '62' for Healthcare, '23' for Construction).
        If omitted, returns totals across all industries.
    year: Data year (default 2021). Available: 2012-2021.
Parameters (JSON Schema)

Name          Required   Default
year          No         2021
state         Yes
naics_code    No
county_fips   No

Output Schema (JSON Schema)

Name      Required
result    Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully identifies the Census Bureau as the upstream data source and lists the specific data points returned (establishments, employees, payroll). However, it omits operational details like rate limits, data caching behavior, or freshness guarantees that would fully characterize the tool's runtime behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description follows an effective structure: single-sentence purpose, single-sentence elaboration, then structured Args documentation. Every section earns its place. The only minor inefficiency is slight redundancy between the first sentence and the opening of the second sentence ('Returns County Business Patterns data' restates the implicit return type from 'Get').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema (which absolves the description from detailing return values) and the presence of 4 parameters with complex domain requirements (NAICS codes, FIPS codes), the description achieves appropriate completeness. The Args section covers all parameters; the narrative explains the data domain. Only missing operational metadata (rate limits, auth) prevents a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% input schema description coverage, the Args section comprehensively documents all 4 parameters with format specifications (two-letter, 3-digit), concrete examples ('CA', '037', '72' for Accommodation/Food), valid ranges (2012-2021), and default values (year=2021). This perfectly compensates for the schema deficiency and adds significant value beyond the structured fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb-resource combination ('Get business establishment, employment, and payroll data by county') and identifies the Census Bureau data source. It distinguishes implicitly from sibling 'get_state_business_summary' via the 'by county' scope, but lacks explicit differentiation from 'compare_county_industries' which suggests analytical capabilities this tool may not provide.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage patterns through the Args section (e.g., omitting county_fips returns all counties), but provides no explicit guidance on when to select this tool versus the comparison-focused sibling or the state-level alternative. It documents optional filters well but not selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_state_business_summary

Get state-level business pattern summary.

Returns aggregate establishment, employment, and payroll data for an
entire state, optionally filtered by industry. Useful for state-level
economic overviews in grant applications.

Args:
    state: Two-letter state abbreviation (e.g. 'CA', 'TX', 'NY').
    naics_code: NAICS 2017 industry code to filter by (e.g. '72' for
        Accommodation/Food, '62' for Healthcare). Omit for all industries.
    year: Data year (default 2021). Available: 2012-2021.
Parameters (JSON Schema)

Name          Required   Default
year          No         2021
state         Yes
naics_code    No

Output Schema (JSON Schema)

Name      Required
result    Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses read behavior via 'Returns aggregate...' and identifies the data types returned (establishment, employment, payroll). However, with no annotations provided, the description carries the full disclosure burden and omits details such as idempotency, error conditions, and data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient structure: one-line summary, return description, use case, and structured Args block. No redundancy; examples add precision without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a 3-parameter data retrieval tool. Richly documents all inputs despite empty schema, and presence of output schema means description appropriately avoids detailing return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing rich semantics for all 3 parameters: format examples ('CA', 'TX'), industry code examples ('72' for Accommodation/Food), and valid ranges (2012-2021).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Get' and resource 'state-level business pattern summary'. Clearly distinguishes from siblings 'get_county_business_patterns' and 'compare_county_industries' by emphasizing 'state-level' and 'entire state' scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context ('Useful for state-level economic overviews in grant applications'), giving agents a concrete scenario. However, lacks explicit comparison to county-level siblings for when to prefer state vs county granularity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
