
US Government Open Data MCP

nih_projects_by_agency

Analyze NIH research funding distribution by institute for a fiscal year to understand budget allocation across disease areas like cancer and infectious diseases.

Instructions

Get project counts by NIH institute/center for a fiscal year. Shows which institutes fund the most research: NCI (cancer), NIAID (infectious diseases), etc. Useful for understanding NIH budget allocation across disease areas.
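
For reference, a call to this tool over MCP's standard JSON-RPC transport uses the tools/call method. The sketch below is illustrative only; the fiscal year value is an example, not something the server documents:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "nih_projects_by_agency",
    "arguments": { "fiscal_year": 2024 }
  }
}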

Input Schema

| Name        | Required | Description                    | Default |
| ----------- | -------- | ------------------------------ | ------- |
| fiscal_year | Yes      | Fiscal year                    | 2024    |
| agencies    | No       | Specific agency codes to check | top 25  |
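
The page also exposes this schema as JSON Schema. A plausible reconstruction from the table above is sketched below; the field types (integer, string array) and the required list are assumptions, since only the table view is reproduced here:

{
  "type": "object",
  "properties": {
    "fiscal_year": {
      "type": "integer",
      "description": "Fiscal year",
      "default": 2024
    },
    "agencies": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Specific agency codes to check (default: top 25)"
    }
  },
  "required": ["fiscal_year"]
}
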
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies a read-only operation ('Get project counts'), which is safe, but does not disclose behavioral traits such as rate limits, authentication needs, or data freshness. It adds some context about the output ('Shows which institutes fund the most research') yet says nothing about format, pagination, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
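
For context, the MCP specification defines optional tool annotations that let a server declare exactly these traits. A read-only external lookup like this one would plausibly ship hints such as the following; the values shown are what one would expect for this tool, not what the server actually declares (openWorldHint is set because the tool queries an external data source):

{
  "annotations": {
    "readOnlyHint": true,
    "destructiveHint": false,
    "idempotentHint": true,
    "openWorldHint": true
  }
}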

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core functionality, followed by explanatory and usage context. Every sentence earns its place without redundancy, making it efficient and easy to parse for an agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is adequate but has gaps. It covers purpose and parameter semantics well, but lacks details on output format, behavioral constraints, or error handling. Without annotations or an output schema, the agent must infer these aspects, making it minimally viable but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
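
As an illustration of what would close the output-format gap, a declared output schema might look like the sketch below. This is entirely hypothetical: the tool publishes no output schema, and the field names are invented for illustration only:

{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "agency": { "type": "string" },
      "project_count": { "type": "integer" }
    }
  }
}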

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining what each parameter is for: 'fiscal_year' scopes the query to a single year, and 'agencies' lets the caller restrict the check to specific agency codes (default: top 25). The concrete examples ('NCI (cancer), NIAID (infectious diseases)') give the codes semantic meaning beyond the schema's bare field descriptions, which lifts the score above the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
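
To make the parameter interaction concrete, a call narrowing the counts to specific institutes might pass arguments like these (a sketch assuming 'agencies' accepts an array of institute codes; the codes are the examples from the description):

{
  "name": "nih_projects_by_agency",
  "arguments": {
    "fiscal_year": 2024,
    "agencies": ["NCI", "NIAID"]
  }
}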

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get'), resource ('project counts'), and scope ('by NIH institute/center for a fiscal year'). It distinguishes itself from sibling tools by focusing on NIH funding allocation, whereas others in the list cover different agencies or data types (e.g., the BEA, BLS, and CDC tools).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear context for when to use the tool ('Useful for understanding NIH budget allocation across disease areas'), which helps an agent infer it is meant for analysis or reporting. However, it never says when not to use it, and it names no alternatives for similar data (e.g., 'nih_search_projects' or 'nih_spending_by_category' from the sibling list).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lzinga/us-government-open-data-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.