Glama

Server Details

County and tract-level health outcomes, behaviors, and preventive services from CDC PLACES

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
compare_counties

Compare specific CDC PLACES measures across multiple counties.

Returns a side-by-side comparison of selected health measures for the
given counties. Useful for benchmarking one county against peers.

Args:
    fips_list: List of 5-digit county FIPS codes (e.g. ['53033', '53053', '06037']).
               Maximum 10 counties per request.
    measures: List of PLACES measure IDs to compare (e.g. ['DIABETES', 'OBESITY']).
              Maximum 10 measures per request.
Parameters (JSON Schema)

Name        Required   Description   Default
measures    Yes
fips_list   Yes

Output Schema

Name     Required   Description
result   Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the disclosure burden. It successfully specifies operational limits ('Maximum 10 counties', 'Maximum 10 measures') and return format ('side-by-side comparison'). However, it omits explicit safety declarations (read-only status) or auth requirements that would be necessary for a complete behavioral profile without annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient four-part structure: purpose statement, return value description, use case qualification, and detailed parameter specifications. No redundant text; examples are illustrative but concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple flat schema (2 array parameters) and existence of output schema, the description provides complete coverage. It identifies the data source (CDC PLACES), explains the comparative return value, documents parameter constraints, and specifies limits—sufficient for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Exemplary compensation for 0% schema description coverage. The Args section provides detailed semantics including format specifications ('5-digit county FIPS codes'), concrete examples (['53033', '53053']), value constraints (Maximum 10), and domain context ('PLACES measure IDs').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Compare') and resource ('CDC PLACES measures across multiple counties'). The terms 'side-by-side comparison' and 'benchmarking' clearly distinguish this from sibling tools like get_county_measures (likely single entity) and get_measure_by_state (different aggregation level).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context ('benchmarking one county against peers') indicating the comparative analytical purpose. However, it does not explicitly reference sibling tools as alternatives for non-comparative scenarios (e.g., 'use get_county_measures for single county data').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
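The documented limits and argument shapes for compare_counties can be checked client-side before a call goes out. Below is a minimal sketch that assumes the standard MCP `tools/call` JSON-RPC request envelope; the gateway URL is not shown on this page, so only the payload is constructed, not sent:

```python
import json

def build_compare_counties_call(fips_list, measures, request_id=1):
    """Build an MCP tools/call payload for compare_counties.

    Enforces the documented limits client-side: at most 10 counties
    and 10 measures per request, and FIPS codes as 5-digit strings.
    """
    if len(fips_list) > 10:
        raise ValueError("compare_counties accepts at most 10 counties")
    if len(measures) > 10:
        raise ValueError("compare_counties accepts at most 10 measures")
    for fips in fips_list:
        if not (isinstance(fips, str) and len(fips) == 5 and fips.isdigit()):
            raise ValueError(f"county FIPS must be a 5-digit string, got {fips!r}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "compare_counties",
            "arguments": {"fips_list": fips_list, "measures": measures},
        },
    }

payload = build_compare_counties_call(
    ["53033", "53053", "06037"], ["DIABETES", "OBESITY"]
)
print(json.dumps(payload, indent=2))
```

Validating the limits before sending avoids burning a round trip on a request the server would reject.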

get_county_measures

Get all CDC PLACES health measures for a county.

Returns up to 36 measures including health outcomes (diabetes, obesity,
heart disease, etc.), health behaviors (smoking, binge drinking), preventive
services (checkups, screenings), and health status indicators.

Args:
    county_fips: 5-digit county FIPS code (e.g. '53033' for King County, WA).
                 Must be a string, not an integer.
    year: Optional release year to filter by. Omit for the most recent data.
Parameters (JSON Schema)

Name          Required   Description   Default
year          No
county_fips   Yes

Output Schema

Name     Required   Description
result   Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It effectively discloses the data payload behavior: up to 36 measures with specific categories (health outcomes, behaviors, preventive services, status indicators). It omits operational details like rate limits or caching, but the data content disclosure is comprehensive for a read-only retrieval tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with purpose front-loaded, followed by return value summary, then Args section with clear formatting. Every sentence earns its place—no redundancy or generic filler. The brevity is appropriate given the parameter documentation is dense with examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage of the 2 parameters despite 0% schema coverage, and helpful summary of output schema contents. Output schema exists, so detailed return description is supplementary but welcome. Minor gap: lacks explicit sibling differentiation guidance and operational behavioral details (rate limits, authentication requirements).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, requiring description to fully compensate. The Args section excellently documents both parameters: county_fips includes format (5-digit), concrete example ('53033' for King County, WA), and type constraint clarification ('Must be a string, not an integer'); year explains optional behavior ('Omit for the most recent data').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specific verb ('Get') + specific resource ('CDC PLACES health measures') + scope ('for a county'). The county-level scope clearly distinguishes from siblings get_tract_measures (tract-level), get_measure_by_state (state-level), and compare_counties (comparison tool).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context by specifying 'county' granularity and detailing the 36 returned measure types, which helps agents select based on data needs. However, lacks explicit cross-references to siblings (e.g., 'for state-level data use get_measure_by_state') or explicit when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
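The "Must be a string, not an integer" caveat matters because integer FIPS codes silently lose their leading zeros (6037 vs. '06037' for Los Angeles County). A small helper — hypothetical, not part of the server — that normalizes input before building the tool arguments:

```python
def normalize_county_fips(fips):
    """Coerce a county FIPS code to the 5-digit string the tool expects.

    Integer input drops leading zeros, so zero-pad after converting
    to a string.
    """
    s = str(fips).strip()
    if not s.isdigit() or len(s) > 5:
        raise ValueError(f"not a county FIPS code: {fips!r}")
    return s.zfill(5)

# year is omitted here, which per the docstring returns the most recent data
args = {"county_fips": normalize_county_fips(6037)}
```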

get_measure_by_state

Get one CDC PLACES measure for all counties in a state.

Useful for comparing a specific health metric (e.g. DIABETES, OBESITY)
across all counties within a state. Returns the most recent data year.

Args:
    state_abbr: Two-letter state abbreviation (e.g. 'WA', 'CA').
    measure_id: PLACES measure ID (e.g. 'DIABETES', 'OBESITY', 'BPHIGH').
                Use search_measures to find valid IDs.
Parameters (JSON Schema)

Name         Required   Description   Default
measure_id   Yes
state_abbr   Yes

Output Schema

Name     Required   Description
result   Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses temporal behavior ('Returns the most recent data year'), which is valuable context. However, it omits the idempotency/safety traits, rate limits, and data freshness guarantees that would be necessary for a complete behavioral profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded, followed by use case, return behavior, and parameter details. The 'Args:' block is necessary given schema deficiencies. No redundant sentences, though the docstring style slightly reduces readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 2-parameter tool with output schema present. Parameter constraints and measure discovery path are documented. Only minor gap is lack of explicit mention of output format details, though these are covered by the output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Exceptional compensation for 0% schema coverage. Description provides format specifications ('Two-letter state abbreviation'), concrete examples ('WA', 'CA', 'DIABETES'), and discovery guidance ('Use search_measures') for both parameters, exceeding what the schema titles provide.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'Get' + resource 'CDC PLACES measure' + scope 'all counties in a state'. The phrase 'all counties' effectively distinguishes this from siblings like get_county_measures (single county) and get_tract_measures (tract-level).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear use case ('comparing a specific health metric across all counties') and explicitly references sibling tool search_measures for discovering valid IDs. Could be strengthened by explicitly contrasting with get_county_measures for single-county queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
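Since this tool returns one measure for every county in a state, a common follow-up is ranking counties by value. A sketch under the assumption that each result row carries `county` and `data_value` keys — the real field names are defined by the server's output schema, so adjust accordingly:

```python
def rank_counties(rows, top_n=5):
    """Rank counties by prevalence for one measure, highest first.

    Rows with a missing data_value (suppressed or unavailable
    estimates) are skipped rather than sorted as zero.
    """
    usable = [r for r in rows if r.get("data_value") is not None]
    return sorted(usable, key=lambda r: r["data_value"], reverse=True)[:top_n]

# Illustrative values only, not real PLACES estimates
sample = [
    {"county": "King", "data_value": 7.1},
    {"county": "Pierce", "data_value": 9.4},
    {"county": "Spokane", "data_value": None},  # suppressed estimate
]
top = rank_counties(sample, top_n=2)
```

Filtering out null estimates first keeps suppressed small-population counties from skewing the ranking.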

get_tract_measures

Get all CDC PLACES health measures for a census tract.

Returns tract-level estimates for health outcomes, behaviors, preventive
services, and health status indicators. Tract-level data uses small area
estimation and may have wider confidence intervals than county data.

Args:
    tract_fips: 11-digit census tract FIPS code (e.g. '53033005300').
                Must be a string, not an integer.
    year: Optional release year to filter by. Omit for the most recent data.
Parameters (JSON Schema)

Name         Required   Description   Default
year         No
tract_fips   Yes

Output Schema

Name     Required   Description
result   Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully adds critical behavioral context about data quality (small area estimation methods and wider confidence intervals) that helps set expectations for the returned data, though it omits operational details like error handling for invalid FIPS codes.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear logical flow: purpose statement, return value description, data quality caveat, then parameter specifications. The Args block is slightly verbose but necessary given the schema's lack of descriptions. No sentences waste space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema (per context signals) and only two parameters, the description is appropriately complete. It covers the purpose, data limitations, and parameter details adequately. A minor gap is the lack of brief context about what 'CDC PLACES' represents for users unfamiliar with the dataset.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage (only titles present), the description fully compensates by providing essential semantics for both parameters: the 11-digit format with example for tract_fips including the crucial type constraint ('Must be a string, not an integer'), and the behavior guidance for year ('Omit for the most recent data').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Get') and clearly identifies the resource ('CDC PLACES health measures') and scope ('census tract'). It effectively distinguishes from siblings like 'get_county_measures' and 'get_measure_by_state' by explicitly specifying tract-level granularity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit guidance by noting that 'Tract-level data...may have wider confidence intervals than county data,' which helps the agent determine when to prefer the county-level sibling tool. However, it stops short of explicitly naming the sibling alternative (get_county_measures) for that use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
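Tract FIPS codes are hierarchical: 2 state digits + 3 county digits + 6 tract digits. That makes it straightforward to derive the county FIPS from a tract FIPS when the wider tract-level confidence intervals suggest falling back to county-level data. A small helper, hypothetical and for illustration only:

```python
def split_tract_fips(tract_fips):
    """Split an 11-digit tract FIPS into state, county, and tract parts.

    The first five digits double as the county FIPS, which is what
    get_county_measures expects.
    """
    if not (isinstance(tract_fips, str)
            and len(tract_fips) == 11
            and tract_fips.isdigit()):
        raise ValueError(f"tract FIPS must be an 11-digit string, got {tract_fips!r}")
    return {
        "state_fips": tract_fips[:2],
        "county_fips": tract_fips[:5],
        "tract_code": tract_fips[5:],
    }

# Using the tract example from the docstring above
parts = split_tract_fips("53033005300")
```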

search_measures

Search available CDC PLACES measures by name or category.

Returns matching measure IDs, names, and categories. Use this to find
the correct measure_id for other tools.

Categories: Health Outcomes, Health Behaviors, Prevention, Health Status.

Args:
    keyword: Search term (e.g. 'diabetes', 'smoking', 'prevention', 'heart').
Parameters (JSON Schema)

Name      Required   Description   Default
keyword   Yes

Output Schema

Name     Required   Description
result   Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses return contents (IDs, names, categories) and valid category values, but lacks critical behavioral details like exact vs. fuzzy matching, pagination, or empty-result behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured and front-loaded with purpose first, followed by return values, usage context, and parameter documentation. Every sentence adds value, though the 'Args:' section format slightly disrupts prose flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple single-parameter search tool. It compensates for missing parameter descriptions in the schema and acknowledges the existence of an output schema, though it could benefit from explaining search mechanics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates effectively by defining the parameter as a 'Search term' and providing concrete examples (e.g., 'diabetes', 'smoking'), though it omits details like minimum length or matching behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the specific action (search) and resource (CDC PLACES measures) and distinguishes from siblings by establishing this as a discovery tool to 'find the correct measure_id for other tools'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context on when to use the tool (to find measure IDs before using sibling retrieval tools), though it does not explicitly name the alternative tools or state exclusion conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
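The discovery workflow — search first, then pass the returned measure_id to a retrieval tool — can be sketched locally. The measure list below is a hypothetical three-entry mirror for illustration; the authoritative catalog comes from the tool itself, and its exact matching behavior (substring vs. fuzzy) is not documented, so a case-insensitive substring match is assumed here:

```python
# Hypothetical local mirror of a few PLACES measures, for illustration only
MEASURES = [
    {"measure_id": "DIABETES", "name": "Diagnosed diabetes among adults",
     "category": "Health Outcomes"},
    {"measure_id": "BPHIGH", "name": "High blood pressure among adults",
     "category": "Health Outcomes"},
    {"measure_id": "CSMOKING", "name": "Current smoking among adults",
     "category": "Health Behaviors"},
]

def search_measures_local(keyword):
    """Case-insensitive substring match over measure names and categories,
    approximating one plausible behavior of a keyword search."""
    kw = keyword.lower()
    return [m for m in MEASURES
            if kw in m["name"].lower() or kw in m["category"].lower()]

hits = search_measures_local("smoking")
```

An agent would then feed `hits[0]["measure_id"]` into get_measure_by_state or compare_counties.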

Discussions

No comments yet.