US Government Open Data MCP (lzinga)

calc_search_rates (read-only)

Search GSA CALC+ ceiling rates for federal labor categories to find awarded hourly rates on GSA MAS contracts. Useful for market research, IGCEs, and competitive pricing.

Instructions

Search GSA CALC+ ceiling rates for federal labor categories. Find awarded hourly rates on GSA MAS professional services contracts. Search by keyword (wildcard across labor category, vendor, contract), exact field match, or browse with filters. Useful for market research, IGCEs, and competitive pricing. Data refreshed daily.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| keyword | No | Wildcard keyword search across labor category, vendor name, and contract number (2 char min), e.g. 'software engineer', 'Booz', 'GS10F' | |
| search | No | Exact field match as 'field:value', e.g. 'labor_category:Engineer II', 'vendor_name:Deloitte', 'idv_piid:GS10F0303V' | |
| education_level | No | Education filter: 'HS', 'AA', 'BA', 'MA', 'PHD'. Use pipe for multiple: 'BA\|MA' | |
| experience_range | No | Experience range as 'min,max' years, e.g. '3,10' or '5,20' | |
| min_years_experience | No | Exact minimum years, e.g. '5' | |
| price_range | No | Hourly rate range as 'min,max' dollars, e.g. '50,150' | |
| worksite | No | Worksite: 'Contractor', 'Customer', 'Both' | |
| business_size | No | Business size: 'S' (Small Business), 'O' (Other than Small Business) | |
| security_clearance | No | Security clearance required: 'yes' or 'no' | |
| sin | No | GSA SIN (Special Item Number), e.g. '541330ENG', '541620' | |
| category | No | Service category, e.g. 'Professional Services', 'Facilities' | |
| subcategory | No | Service subcategory, e.g. 'IT Services', 'Engineering' | |
| ordering | No | Sort field: 'labor_category', 'current_price', 'education_level', 'keywords', 'certifications', 'min_years_experience', 'vendor_name', 'schedule' | current_price |
| sort | No | Sort direction | asc |
| page | No | Page number | 1 |
| page_size | No | Results per page | 20 |
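To illustrate how an agent might assemble arguments for this tool, here is a minimal sketch of a client-side helper that builds and sanity-checks a calc_search_rates argument dict. The helper name and validation rules are assumptions inferred from the parameter descriptions above, not behavior guaranteed by the server.

```python
# Hypothetical helper: compose arguments for a calc_search_rates call.
# Validation rules (2-char keyword minimum, 'field:value' syntax,
# 'min,max' ranges, pipe-separated education levels) are inferred
# from the schema descriptions, not from server-side documentation.

def build_search_args(keyword=None, search=None, education_level=None,
                      experience_range=None, price_range=None,
                      page=1, page_size=20):
    args = {}
    if keyword is not None:
        if len(keyword) < 2:
            raise ValueError("keyword requires at least 2 characters")
        args["keyword"] = keyword
    if search is not None:
        if ":" not in search:
            raise ValueError("search must use 'field:value' syntax")
        args["search"] = search
    if education_level is not None:
        valid = {"HS", "AA", "BA", "MA", "PHD"}
        if not set(education_level.split("|")) <= valid:
            raise ValueError(f"education_level values must be in {sorted(valid)}")
        args["education_level"] = education_level
    for name, value in (("experience_range", experience_range),
                        ("price_range", price_range)):
        if value is not None:
            lo, hi = value.split(",")  # 'min,max' form
            if float(lo) > float(hi):
                raise ValueError(f"{name}: min exceeds max")
            args[name] = value
    args["page"] = page
    args["page_size"] = page_size
    return args

example = build_search_args(keyword="software engineer",
                            education_level="BA|MA",
                            price_range="50,150")
```

The resulting dict would be passed as the tool-call arguments; defaults (page 1, page_size 20) are filled in explicitly so the call is unambiguous.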
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, which the description aligns with. The description adds 'Data refreshed daily' as a useful behavioral detail. However, it does not discuss rate limits, pagination behavior beyond default parameters, or maximum result size, which would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences, front-loaded with the main purpose, and includes use cases. It avoids unnecessary details, though it could be slightly more structured with explicit separation of search modes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 16 parameters and no output schema, the description provides a good overview of functionality, use cases, and data freshness. It implies pagination via parameters but does not explicitly describe the response format, which is acceptable since there is no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description does not significantly augment the parameter meanings beyond what is already in the schema; it reiterates keyword and exact search methods that are already documented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches GSA CALC+ ceiling rates for federal labor categories, with a specific verb and resource. It mentions use cases for market research and IGCEs. However, it does not differentiate from sibling tools like calc_contract_rates or calc_suggest, which could lead to confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no explicit guidance on when to use this tool versus alternatives. It states general use cases ('market research, IGCEs, competitive pricing') but does not compare it with sibling tools or say when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
