
FCC Broadband Map

Server Details

Broadband availability, providers, speeds, and BEAD classification from the FCC

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
get_broadband_by_location

Get broadband providers and availability at a specific lat/lon location.

Returns a list of broadband providers serving the location with their
advertised download/upload speeds and technology types. Includes BEAD
classification (unserved/underserved/served) based on max available speeds.

NOTE: The FCC Broadband Map API has bot protection and may reject requests.
If you get an error, the API endpoint may have changed. The FCC updates
this API frequently without notice.

Args:
    latitude: Location latitude (e.g. 38.8977 for Washington DC).
    longitude: Location longitude (e.g. -77.0365 for Washington DC).
    technology_code: Filter by technology (0=All, 10=Copper, 40=Cable,
                    50=Fiber, 60=Satellite, 70=Fixed Wireless).
    speed_download: Minimum download speed in Mbps (default 25).
    speed_upload: Minimum upload speed in Mbps (default 3).
    as_of_date: BDC filing date in YYYY-MM-DD format (default 2024-06-30).
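The BEAD classification mentioned above follows the program's widely cited speed tiers (unserved below 25/3 Mbps, underserved at or above 25/3 but below 100/20, served at or above 100/20). A minimal sketch of that bucketing, assuming those thresholds:

```python
def classify_bead(max_down_mbps: float, max_up_mbps: float) -> str:
    """Bucket a location by its best available speeds.

    unserved:    best service below 25/3 Mbps
    underserved: at least 25/3 but below 100/20 Mbps
    served:      at least 100/20 Mbps
    """
    if max_down_mbps >= 100 and max_up_mbps >= 20:
        return "served"
    if max_down_mbps >= 25 and max_up_mbps >= 3:
        return "underserved"
    return "unserved"
```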
Parameters (JSON Schema)

Name             Required  Description  Default
latitude         Yes
longitude        Yes
as_of_date       No                     2024-06-30
speed_upload     No
speed_download   No
technology_code  No
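As a rough sketch, an MCP client invokes a tool like this with a JSON-RPC `tools/call` request. The argument names below mirror the Args documented above; the values are the docs' own examples and defaults:

```python
# Hypothetical MCP "tools/call" request body for this tool.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_broadband_by_location",
        "arguments": {
            "latitude": 38.8977,        # Washington DC (example from the docs)
            "longitude": -77.0365,
            "technology_code": 0,       # 0 = all technologies
            "speed_download": 25,       # Mbps, documented default
            "speed_upload": 3,          # Mbps, documented default
            "as_of_date": "2024-06-30", # documented default BDC filing date
        },
    },
}
```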

Output Schema

Name    Required  Description
result  Yes
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden and excels: it warns about FCC API bot protection and rejection risks, notes frequent unannounced API changes, and explains the returned content, including BEAD classification semantics (unserved/underserved/served) that go beyond what the bare output schema conveys.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose, distinct output description, warning block for API limitations, and bulleted Args section. Length is appropriate for the complexity (6 parameters, external API warnings) with no redundant sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists (noted in context signals), the description adequately summarizes return values (provider list, speeds, BEAD classification) while focusing descriptive effort on parameters and API behavior warnings necessary for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema coverage, the Args section comprehensively documents all 6 parameters with semantic meaning, examples (e.g., 38.8977 for Washington DC), and critical enum mappings for technology_code (0=All, 10=Copper, etc.) that the bare schema lacks entirely.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Get') + resource ('broadband providers') + scope ('specific lat/lon location'), clearly distinguishing it from siblings that operate on county/state aggregates or county-level provider lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the 'specific lat/lon location' phrasing implicitly contrasts with county/state summaries, the description lacks explicit guidance on when to choose this tool over 'get_broadband_summary_by_county' or alternatives. The FCC API warning provides useful operational context but not selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_broadband_summary_by_county

Get broadband availability statistics for a county.

Returns aggregated broadband data including provider counts, technology
availability, and BEAD-relevant metrics (unserved/underserved/served
location percentages).

NOTE: County-level summary endpoints may not be directly available in all
API versions. This tool attempts multiple endpoint patterns.

Args:
    county_fips: 5-digit county FIPS code (e.g. '11001' for Washington DC,
                 '53033' for King County WA). Always a string, never an integer.
    speed_download: Minimum download speed threshold in Mbps (default 25).
    speed_upload: Minimum upload speed threshold in Mbps (default 3).
    as_of_date: BDC filing date in YYYY-MM-DD format (default 2024-06-30).
Parameters (JSON Schema)

Name            Required  Description  Default
as_of_date      No                     2024-06-30
county_fips     Yes
speed_upload    No
speed_download  No
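Because `county_fips` must be a 5-digit string (a leading zero is silently lost whenever the code is handled as an integer), a small hypothetical normalizer can guard call sites:

```python
def normalize_county_fips(value) -> str:
    """Coerce a county FIPS code to the required 5-digit string form.

    Accepts ints like 1001 (whose leading zero was lost) or strings;
    raises on anything that is not exactly 5 digits after zero-padding.
    """
    s = str(value).zfill(5)
    if len(s) != 5 or not s.isdigit():
        raise ValueError(f"invalid county FIPS: {value!r}")
    return s
```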

Output Schema

Name    Required  Description
result  Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full disclosure burden. It adds valuable behavioral context about 'attempts multiple endpoint patterns' (resilience logic) and describes the returned content (aggregated data, BEAD metrics). However, it lacks an explicit safety classification (read-only vs destructive), rate limits, and the auth requirements that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose statement, followed by return value description, behavioral note, and parameter documentation. Every sentence earns its place; the endpoint pattern warning and Args specifications are necessary given schema deficiencies. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with 0% schema coverage, the description successfully compensates with the Args block. Since output schema exists, brief summary of return values is sufficient. Minor gap: lacks error handling documentation or data freshness guarantees beyond the default date example.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage. Args block provides rich semantics: county_fips includes format (5-digit), examples ('11001', '53033'), and type constraints ('Always a string, never an integer'). Date parameter specifies domain meaning ('BDC filing date') and format. Speed parameters clarify they are thresholds with units (Mbps) and defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb-resource combination ('Get broadband availability statistics for a county') with specific scope. Mentions aggregated data and BEAD metrics which distinguishes from provider-list siblings. Lacks explicit comparative guidance against get_broadband_summary_by_state or get_providers_by_county to earn a 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Contains implied usage through scope ('for a county') but lacks explicit 'when to use vs alternatives' guidance. The API version caveat ('may not be directly available in all API versions') provides implementation context but not comparative usage guidance against sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_broadband_summary_by_state

Get state-level broadband availability summary.

Returns aggregated broadband statistics for the state including provider
counts and technology deployment. Useful for BEAD program analysis to
identify states with significant unserved/underserved populations.

Args:
    state_fips: 2-digit state FIPS code (e.g. '53' for Washington, '11' for DC).
                Always a string, never an integer.
    speed_download: Minimum download speed threshold in Mbps (default 25).
    speed_upload: Minimum upload speed threshold in Mbps (default 3).
    as_of_date: BDC filing date in YYYY-MM-DD format (default 2024-06-30).
Parameters (JSON Schema)

Name            Required  Description  Default
as_of_date      No                     2024-06-30
state_fips      Yes
speed_upload    No
speed_download  No
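State and county FIPS codes compose: a 5-digit county FIPS is simply the 2-digit state FIPS followed by a 3-digit county code, which makes it easy to drill down from this state-level tool to its county-level siblings. A tiny illustrative helper:

```python
def county_fips(state_fips: str, county_code: str) -> str:
    """Join a 2-digit state FIPS and 3-digit county code into a county FIPS."""
    if len(state_fips) != 2 or not state_fips.isdigit():
        raise ValueError(f"invalid state FIPS: {state_fips!r}")
    if len(county_code) != 3 or not county_code.isdigit():
        raise ValueError(f"invalid county code: {county_code!r}")
    return state_fips + county_code
```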

Output Schema

Name    Required  Description
result  Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden and successfully conveys the return structure ('aggregated broadband statistics', 'provider counts', 'technology deployment') and domain context (BEAD program). It lacks auth and rate-limit details but adequately describes the read-only aggregation behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured and front-loaded: purpose (sentence 1), return values (sentence 2), usage context (sentence 3), followed by structured Args block. No wasted text; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete given constraints: output schema exists so return value description can be brief; description compensates for 0% schema coverage with detailed Args block; includes domain-specific context (BEAD program, BDC filing dates) necessary for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (properties lack descriptions), but the Args section fully compensates by documenting all 4 parameters with rich semantics: units (Mbps), formats (2-digit FIPS, YYYY-MM-DD), constraints ('Always a string, never an integer'), and default values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') + resource ('state-level broadband availability summary') and explicitly distinguishes geographic scope from siblings (county-level and location-specific alternatives) by specifying 'state-level' in first sentence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context ('Useful for BEAD program analysis to identify states with significant unserved/underserved populations') establishing when to invoke. Does not explicitly name sibling alternatives, but geographic scope is self-evident from description and tool name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_providers_by_county

Get list of broadband providers serving a county.

Returns provider names, technology types, and speed tiers available
in the specified county. Useful for BEAD applications to identify
which providers serve an area and what technologies they deploy.

Args:
    county_fips: 5-digit county FIPS code (e.g. '11001' for Washington DC,
                 '53033' for King County WA). Always a string, never an integer.
    technology_code: Filter by technology (0=All, 10=Copper, 40=Cable,
                    50=Fiber, 60=Satellite, 70=Fixed Wireless).
    speed_download: Minimum download speed threshold in Mbps (default 25).
    speed_upload: Minimum upload speed threshold in Mbps (default 3).
    as_of_date: BDC filing date in YYYY-MM-DD format (default 2024-06-30).
Parameters (JSON Schema)

Name             Required  Description  Default
as_of_date       No                     2024-06-30
county_fips      Yes
speed_upload     No
speed_download   No
technology_code  No
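The `technology_code` enum and the documented defaults can be captured once and reused when assembling tool arguments. The mapping below is taken directly from the Args block above, while `build_arguments` is a hypothetical convenience:

```python
# Technology codes as documented in the Args block above.
TECHNOLOGY_CODES = {
    0: "All",
    10: "Copper",
    40: "Cable",
    50: "Fiber",
    60: "Satellite",
    70: "Fixed Wireless",
}

def build_arguments(county_fips: str, technology_code: int = 0,
                    speed_download: int = 25, speed_upload: int = 3,
                    as_of_date: str = "2024-06-30") -> dict:
    """Assemble an arguments dict with the documented defaults filled in."""
    if technology_code not in TECHNOLOGY_CODES:
        raise ValueError(f"unknown technology code: {technology_code}")
    return {
        "county_fips": county_fips,
        "technology_code": technology_code,
        "speed_download": speed_download,
        "speed_upload": speed_upload,
        "as_of_date": as_of_date,
    }
```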

Output Schema

Name    Required  Description
result  Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains what the tool returns ('provider names, technology types, and speed tiers') and the domain context (BEAD applications). While it doesn't explicitly label itself as 'safe' or 'read-only', the verb 'Get' combined with the return description makes the read-only nature clear. Lacks explicit error behavior documentation (e.g., invalid FIPS codes).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: purpose first, then return values, then use case, then parameters. The Args block, while lengthy, is necessary given zero schema coverage and efficiently documents all parameters. Only minor verbosity remains in the BEAD application sentence, which could be tightened.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters with 0% schema coverage and an existing output schema, the description is remarkably complete. It compensates fully for the undocumented schema by detailing every parameter. It provides essential domain context (BEAD applications, FIPS codes, technology types) that isn't visible in the structured data. Since output schema exists, the brief mention of return values is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, so the description must fully compensate via the Args block. It excellently documents all 5 parameters: county_fips includes format (5-digit), examples (11001, 53033), and type constraints (string never integer); technology_code maps integer codes to semantic meanings (10=Copper, 40=Cable, etc.); speed thresholds include units (Mbps) and defaults; as_of_date includes format (YYYY-MM-DD) and default.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The opening sentence 'Get list of broadband providers serving a county' provides a specific verb (Get), resource (broadband providers), and scope (county). It clearly distinguishes from siblings: this returns provider-level details rather than summaries (get_broadband_summary_by_county/state) and filters by county rather than specific location (get_broadband_by_location).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context ('Useful for BEAD applications to identify which providers serve an area'), which helps users understand the domain context. However, it lacks explicit guidance on when to use this tool versus the summary or location-based alternatives, or when to prefer those over this one.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
