Server Details

School enrollment, graduation rates, demographics, finance, and Title I data

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
get_district_finance (Grade A)

Get district-level financial data: total revenue, expenditures, per-pupil spending, federal/state/local revenue breakdown.

Returns fiscal data from the CCD School District Finance Survey (F-33),
including revenue sources, expenditure categories, and per-pupil spending.

Args:
    state: Two-letter US state abbreviation (e.g. 'CA', 'NY').
    county_fips: Optional 5-digit county FIPS code to filter by county.
    year: Fiscal year to query (default 2021). Finance data lags 1-2 years.
    limit: Maximum number of districts to return (default 50, max 500).
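The Args above map directly onto an MCP `tools/call` request. A sketch of one such call, assuming `county_fips` is passed as a 5-digit string (the value 06037, Los Angeles County, CA, is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_district_finance",
    "arguments": {
      "state": "CA",
      "county_fips": "06037",
      "year": 2021,
      "limit": 100
    }
  }
}
```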
Parameters (JSON Schema)

Name          Required   Description   Default
year          No
limit         No
state         Yes
county_fips   No

Output Schema

Name     Required   Description
result   Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations exist, the description carries the full burden; it adds crucial behavioral context about the data lag (1-2 years) and the data source (CCD F-33 survey) that is not present in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with clear summary sentence, followed by data source context, then structured Args documentation; every sentence provides value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers tool purpose and parameters given the existence of an output schema; appropriately omits detailed return value specifications while noting the critical data lag limitation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Fully compensates for 0% schema description coverage by providing detailed Args section with examples (e.g., 'CA', 'NY'), format specifications (5-digit FIPS), and constraints (max 500).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Get') + specific resource ('district-level financial data') with detailed breakdowns (revenue, expenditures, per-pupil spending), clearly distinguishing from sibling tools focused on demographics, graduation rates, or general overviews.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context about data freshness ('Finance data lags 1-2 years') and defaults, though lacks explicit comparison to alternatives like get_district_overview.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_district_overview (Grade A)

Get district directory overview: district name, student count, school count, teachers, locale type.

Returns district-level summary information from the CCD District Directory,
including total enrollment, number of schools, teacher FTE counts,
and urban/suburban/rural locale classification.

Args:
    state: Two-letter US state abbreviation (e.g. 'CA', 'NY').
    year: School year to query (default 2022).
    limit: Maximum number of districts to return (default 50, max 500).
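Because state is the only required argument, a minimal `tools/call` request can rely on the documented defaults (year 2022, limit 50). A sketch:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_district_overview",
    "arguments": { "state": "NY" }
  }
}
```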
Parameters (JSON Schema)

Name    Required   Description   Default
year    No
limit   No
state   Yes

Output Schema

Name     Required   Description
result   Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Lacking annotations, the description carries the burden by specifying parameter defaults and the max limit (500), but it omits auth requirements, rate limits, and idempotency details.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with summary line, detailed paragraph, and Args section; slightly redundant between first line and second paragraph but appropriately concise overall.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a simple three-parameter tool; mentions data source (CCD) and key return fields, sufficient given output schema exists to define return structure.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Compensates effectively for 0% schema description coverage by defining all three parameters in Args section, including format hints (two-letter state abbreviation) and constraint details (max 500).

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves district directory data (name, enrollment, school count) from CCD District Directory, implicitly distinguishing from finance and school-level siblings via resource type and data fields.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage through data description (directory overview data) but lacks explicit when-to-use guidance or comparisons to sibling tools like get_district_finance.

get_graduation_rates (Grade A)

Get 4-year adjusted cohort graduation rates by school.

Returns graduation rate data from EdFacts, including cohort counts
and midpoint graduation rates. Filterable by state or county.

Args:
    state: Two-letter US state abbreviation (e.g. 'CA', 'NY').
    county_fips: Optional 5-digit county FIPS code to filter by county.
    year: School year to query (default 2021). Graduation data may lag.
    limit: Maximum number of schools to return (default 50, max 500).
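A sketch of a `tools/call` request that overrides the year default and raises the result limit (the argument values are illustrative; whether a given school year is available is not stated here):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_graduation_rates",
    "arguments": {
      "state": "TX",
      "year": 2020,
      "limit": 200
    }
  }
}
```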
Parameters (JSON Schema)

Name          Required   Description   Default
year          No
limit         No
state         Yes
county_fips   No

Output Schema

Name     Required   Description
result   Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses a critical behavioral trait not in the schema ('Graduation data may lag') and identifies EdFacts as the source, fulfilling the description's burden since no annotations are provided.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded; Args section is necessary and appropriately detailed given schema deficiencies, though slightly verbose.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately complete given output schema exists: covers data provenance, lag warnings, and filtering capabilities without redundantly detailing return values.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage: provides formats (two-letter state, 5-digit FIPS), examples ('CA', 'NY'), and constraints (max 500) for all parameters.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Get') and resource ('4-year adjusted cohort graduation rates by school') clearly distinguishes from sibling tools handling finance, demographics, and general school data.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clear context about data source (EdFacts) and content (cohort counts, midpoint rates), though lacks explicit 'when not to use' guidance versus demographic or finance alternatives.

get_school_demographics (Grade A)

Get school enrollment broken down by race/ethnicity.

Returns enrollment counts by demographic group from the CCD,
including White, Black, Hispanic, Asian, Native American,
Pacific Islander, Two or More Races, and total enrollment.

Args:
    state: Two-letter US state abbreviation (e.g. 'CA', 'NY').
    county_fips: Optional 5-digit county FIPS code to filter by county.
    year: School year to query (default 2022).
    limit: Maximum number of schools to return (default 50, max 500).
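A sketch of a county-filtered `tools/call` request, assuming `county_fips` is passed as a 5-digit string (53033, King County, WA, is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_school_demographics",
    "arguments": {
      "state": "WA",
      "county_fips": "53033"
    }
  }
}
```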
Parameters (JSON Schema)

Name          Required   Description   Default
year          No
limit         No
state         Yes
county_fips   No

Output Schema

Name     Required   Description
result   Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description effectively discloses return content (specific demographic breakdowns including total enrollment) and data provenance (CCD).

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded followed by return value details and parameter specifications; listing all race categories is slightly verbose but informative.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers the simple 4-parameter schema by explaining data source and return contents; appropriate given the existence of an output schema.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the Args section comprehensively explains all 4 parameters with formats (two-letter state, 5-digit FIPS), constraints (max 500), and defaults (2022, 50).

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves school enrollment by race/ethnicity and specifies the data source (CCD), though it doesn't explicitly differentiate from sibling tool get_schools.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to select this tool versus alternatives like get_schools or get_district_overview; only describes what data it returns.

get_schools (Grade A)

Search schools with enrollment, Title I status, type (charter/magnet/regular), and teacher counts.

Returns school directory information from the Common Core of Data (CCD),
including school name, location, enrollment, charter/magnet status,
Title I participation, and full-time equivalent teacher counts.

Args:
    state: Two-letter US state abbreviation (e.g. 'CA', 'NY').
    county_fips: Optional 5-digit county FIPS code to filter by county.
    title_i_only: If True, return only Title I eligible schools.
    year: School year to query (default 2022).
    limit: Maximum number of schools to return (default 50, max 500).
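Note that the docstring's Python-style `True` becomes a lowercase JSON boolean on the wire. A sketch of a `tools/call` request using the Title I filter at the maximum limit (values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "get_schools",
    "arguments": {
      "state": "IL",
      "title_i_only": true,
      "limit": 500
    }
  }
}
```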
Parameters (JSON Schema)

Name           Required   Description   Default
year           No
limit          No
state          Yes
county_fips    No
title_i_only   No

Output Schema

Name     Required   Description
result   Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses data source (CCD) and pagination constraints (max 500), but lacks information on error handling, rate limits, or search behavior when no matches found.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with summary sentence, data source context, and dedicated Args section; minor redundancy between first sentence and second paragraph listing overlapping fields.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage of input parameters and brief mention of return fields is appropriate given existence of output schema; adequately covers the 5-parameter complexity.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema description coverage by detailing all 5 parameters in Args section with formats, examples, and constraints.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it searches schools with specific filters (enrollment, Title I, charter/magnet status) and distinguishes from district-level siblings, though could better differentiate from get_school_demographics.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use versus alternatives like get_school_demographics or get_district_overview; only describes what data is returned.

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.

Last verification attempt failed.

getaddrinfo ENOTFOUND mcp.olyport.com
