Server Details
Environmental justice screening indicators, pollution burden, and demographic vulnerability
- Status: Unhealthy
- Last Tested
- Transport: Streamable HTTP
- URL
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

get_demographic_indicators
Get demographic vulnerability indicators by block group.
Returns demographic data including minority percentage, low income
percentage, linguistic isolation, education levels, and age
distributions. These are the demographic components used in
EJScreen's EJ index calculations.
Args:
state: Two-letter US state abbreviation (e.g. 'WA', 'CA').
county_fips: Optional county FIPS code (3-digit or 5-digit).
If omitted, returns state-level results sorted by
highest demographic index.
limit: Maximum number of block groups to return (default 50, max 500).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| state | Yes | | |
| county_fips | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
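The docstring above gives an agent everything needed to form a call. As an illustration only (MCP tools are invoked via the JSON-RPC `tools/call` method; `build_tool_call` below is a hypothetical client-side helper, not part of this server), a request might be assembled like this:

```python
import json

def build_tool_call(tool, arguments, request_id=1):
    """Build a JSON-RPC 2.0 request envelope for an MCP tools/call invocation."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# county_fips may be 3-digit ('033') or 5-digit ('53033'); limit defaults to 50.
request = build_tool_call(
    "get_demographic_indicators",
    {"state": "WA", "county_fips": "033", "limit": 10},
)
print(json.dumps(request, indent=2))
```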
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses sorting behavior (sorted by highest demographic index when county omitted), result limits (default 50, max 500), and specific data fields returned, compensating for missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient Args-style format with no fluff; purpose statement front-loaded, though the slightly technical structure could be more conversational.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for simple 3-parameter tool; lists return value categories (minority percentage, income levels) even though output schema exists, and clarifies result ordering behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage by detailing all parameters: state format (two-letter), county_fips format (3/5-digit) and optionality, limit constraints and defaults.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb-resource pair ('Get demographic vulnerability indicators') and distinguishes from siblings by specifying block-group granularity and EJScreen demographic components vs environmental data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through EJScreen context and block-group specificity but lacks explicit guidance on when to choose this over summary tools (get_ej_state_summary, get_ej_county_summary) or location-based queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ej_county_summary
Get EJ indicators for census block groups in a county.
Returns EJScreen environmental justice data for block groups within
the specified county, sorted by highest overall environmental burden.
Includes environmental indicators, demographic data, and EJ indexes.
Args:
state: Two-letter US state abbreviation (e.g. 'WA', 'CA').
county_fips: County FIPS code, either 5-digit full (e.g. '53033')
or 3-digit county portion (e.g. '033'). If 3 digits,
state FIPS is prepended automatically.
limit: Maximum number of block groups to return (default 50, max 500).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| state | Yes | | |
| county_fips | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
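The auto-prepending behavior described in the Args section can be sketched as follows. This is a hypothetical helper mirroring the documented behavior, not the server's actual implementation; `STATE_FIPS` is a two-entry excerpt for illustration:

```python
# Excerpt of state FIPS prefixes for illustration (WA=53, CA=06).
STATE_FIPS = {"WA": "53", "CA": "06"}

def normalize_county_fips(state, county_fips):
    """A 3-digit county code has the state FIPS prepended to form the
    full 5-digit code; 5-digit codes pass through unchanged."""
    if len(county_fips) == 3:
        return STATE_FIPS[state] + county_fips
    return county_fips

print(normalize_county_fips("WA", "033"))    # 53033
print(normalize_county_fips("WA", "53033"))  # 53033
```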
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses critical behavioral details absent from annotations: results are sorted by highest environmental burden, state FIPS is auto-prepended to 3-digit county codes, and specific data categories (environmental, demographic, EJ indexes) are returned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured docstring format with summary line followed by Args section; appropriately concise with no redundant sentences, though slightly utilitarian.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete given the output schema exists; covers return value categories, sorting behavior, and all input parameters without needing to detail the full output structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Completely compensates for 0% schema description coverage by providing detailed Args section with formats, examples ('WA', '53033'), and behavior notes (auto-prepending) for all three parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves EJ indicators for census block groups within a specific county, distinguishing scope from state-level and location-specific siblings, though explicit differentiation is absent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to select this tool over siblings like get_ej_state_summary or get_ej_data_by_location; lacks explicit when/why recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ej_data_by_location
Get EJScreen environmental justice data for a specific location.
Uses the EPA EJScreen REST broker to retrieve EJ screening indicators
for a point location with a buffer distance. Returns environmental
indicators, demographic data, and EJ indexes.
Args:
latitude: Latitude of the location (e.g. 47.61).
longitude: Longitude of the location (e.g. -122.33).
distance: Buffer distance in miles around the point (default 1.0).

| Name | Required | Description | Default |
|---|---|---|---|
| distance | No | | |
| latitude | Yes | | |
| longitude | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
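A minimal argument builder for this tool might look like the sketch below. The coordinate range check is a client-side sanity guard added here for illustration; the docstring does not say how the server handles out-of-range coordinates:

```python
def build_location_args(latitude, longitude, distance=1.0):
    """Assemble arguments for get_ej_data_by_location; distance is a
    buffer radius in miles (default 1.0)."""
    if not (-90.0 <= latitude <= 90.0 and -180.0 <= longitude <= 180.0):
        raise ValueError("coordinates out of range")
    return {"latitude": latitude, "longitude": longitude, "distance": distance}

# Example from the docstring: downtown Seattle with the default 1-mile buffer.
print(build_location_args(47.61, -122.33))
```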
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses data source (EPA EJScreen REST broker) and return payload categories, but omits auth requirements, rate limits, or error behaviors (annotations absent so description carries full burden).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfectly front-loaded with high information density; no redundant sentences, appropriate length for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficiently complete given the existence of an output schema; mentions high-level return categories without over-specifying, though could note coordinate system (WGS84 assumed).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage by providing clear semantics and concrete examples (47.61, -122.33) and units (miles) for all parameters in the Args block.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific action (get EJScreen data) and resource (environmental justice indicators for a point location), implicitly distinguishing it from county/state summary siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context (point location with buffer) but lacks explicit when-to-use guidance versus county/state alternatives or the sibling indicator tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ej_state_summary
Get a state-level overview of EJ indicators across block groups.
Returns the most environmentally burdened block groups in the state,
sorted by PM2.5 percentile. Useful for identifying areas with the
highest environmental justice concerns.
Args:
state: Two-letter US state abbreviation (e.g. 'WA', 'CA').
limit: Maximum number of block groups to return (default 100, max 500).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| state | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
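One plausible reading of "default 100, max 500" is the clamping behavior sketched below. The server might instead reject out-of-range values; the docstring does not say, so treat this as an assumption:

```python
def clamp_limit(limit=None, default=100, maximum=500):
    """Hypothetical limit handling: an omitted limit falls back to the
    default, and oversized limits are clamped to the maximum."""
    if limit is None:
        return default
    return max(1, min(limit, maximum))

print(clamp_limit(None))  # 100
print(clamp_limit(900))   # 500
```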
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses critical behavioral details absent from annotations/schema: results are sorted by PM2.5 percentile and filtered to 'most burdened' block groups.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose first, then behavior, use case, and Args section; no redundant sentences, though Args formatting is slightly informal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Sufficient for tool complexity; acknowledges output schema exists by describing key return characteristic (PM2.5 sorting) without over-specifying.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema coverage: provides format pattern (two-letter), examples ('WA', 'CA'), and constraints (default 100, max 500) for both parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states state-level scope and PM2.5 sorting, implicitly distinguishing from county/location siblings, though could explicitly differentiate from get_ej_county_summary.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides use case ('identifying areas with highest EJ concerns') but lacks explicit when/when-not guidance relative to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_environmental_indicators
Get specific environmental indicators by block group.
Returns environmental indicator values and state percentiles for
block groups. Can filter to a specific indicator type or return all.
Args:
state: Two-letter US state abbreviation (e.g. 'WA', 'CA').
county_fips: Optional county FIPS code (3-digit or 5-digit).
If omitted, returns state-level results.
indicator: Optional specific indicator to focus on. Options:
'pm25', 'ozone', 'diesel', 'cancer', 'respiratory',
'traffic', 'lead', 'superfund', 'hazwaste',
'wastewater', 'rmp', 'ust'. If omitted, returns all.
limit: Maximum number of block groups to return (default 50, max 500).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| state | Yes | | |
| indicator | No | | |
| county_fips | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
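Because the docstring enumerates the valid `indicator` values, a client can guard calls before they reach the server. The helper below is a hypothetical client-side sketch (the server's own error handling for bad indicator names is not documented); the indicator set is copied verbatim from the docstring:

```python
# Valid values for the optional 'indicator' argument, per the docstring.
VALID_INDICATORS = {
    "pm25", "ozone", "diesel", "cancer", "respiratory", "traffic",
    "lead", "superfund", "hazwaste", "wastewater", "rmp", "ust",
}

def build_indicator_args(state, indicator=None, county_fips=None, limit=50):
    """Assemble arguments for get_environmental_indicators, rejecting
    indicator names the docstring does not list."""
    if indicator is not None and indicator not in VALID_INDICATORS:
        raise ValueError(f"unknown indicator: {indicator!r}")
    args = {"state": state, "limit": limit}
    if indicator is not None:
        args["indicator"] = indicator
    if county_fips is not None:
        args["county_fips"] = county_fips
    return args

print(build_indicator_args("CA", indicator="pm25"))
```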
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, description effectively discloses return behavior (values and percentiles), default limits (50/500), and conditional behavior when optional parameters are omitted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses clear docstring structure with Args section; slightly verbose but every sentence provides necessary detail for AI agent invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given output schema exists; covers all required inputs and optional filters without needing to detail return structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description comprehensively compensates by documenting all four parameters with formats, examples, valid options, and default behaviors.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves environmental indicators by block group with specific return values, though it doesn't explicitly contrast with sibling EJ summary tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit usage guidance through parameter descriptions (when to use filters vs. omit them) but lacks explicit when/when-not guidance relative to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
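Before publishing, you can sanity-check the file's shape locally. The validator below is a minimal sketch based only on the structure shown above (at least one maintainer, each with an `email` key); it does not replicate Glama's actual verification:

```python
import json

def validate_glama_manifest(text):
    """Minimal shape check for a /.well-known/glama.json file."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

manifest = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""
print(validate_glama_manifest(manifest))  # True
```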
Control your server's listing on Glama, including description and metadata
Receive usage reports showing how your server is being used
Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!