
US Government Open Data MCP

open_payments_search

Search U.S. pharmaceutical and medical device company payments to doctors under the Sunshine Act. Find exact dollar amounts, payment types, doctor specialties, and associated drugs or devices.

Instructions

Search CMS Open Payments (Sunshine Act) data — payments from pharma/device companies to doctors. 15M+ records per year. Shows exact dollar amounts, payment type, doctor name/specialty, and which drugs/devices are involved. Cross-reference with FDA (drug safety), lobbying (company influence), and clinical trials.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| company | No | Company name (partial match): 'Pfizer', 'Novo Nordisk', 'Johnson & Johnson' | |
| doctor | No | Doctor last name: 'Smith', 'Jones' (case-insensitive) | |
| state | No | Two-letter state: 'CA', 'TX', 'NY' | |
| specialty | No | Medical specialty (partial match): 'Cardiology', 'Orthopedic', 'Psychiatry' | |
| year | No | Payment year (auto-discovers latest if omitted). Available: 2018-2024+, new years added automatically when CMS publishes. | |
| limit | No | Max results | 20 |
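To make the schema concrete, here is a minimal sketch of assembling arguments for an open_payments_search call. The parameter names and defaults come from the schema above; the `build_search_args` helper and its validation rules are illustrative assumptions, not part of the server.

```python
# Hypothetical helper: build an arguments dict for open_payments_search.
# Parameter names mirror the input schema; validation rules are assumptions.

def build_search_args(company=None, doctor=None, state=None,
                      specialty=None, year=None, limit=20):
    if state is not None and not (len(state) == 2 and state.isalpha()):
        raise ValueError("state must be a two-letter code like 'CA'")
    if year is not None and year < 2018:
        raise ValueError("Open Payments data starts at 2018")
    args = {"company": company, "doctor": doctor, "state": state,
            "specialty": specialty, "year": year, "limit": limit}
    # Drop omitted optional parameters so the server applies its own
    # defaults (e.g., auto-discovering the latest published year).
    return {k: v for k, v in args.items() if v is not None}

print(build_search_args(company="Pfizer", state="CA", year=2023))
# {'company': 'Pfizer', 'state': 'CA', 'year': 2023, 'limit': 20}
```

Leaving `year` out entirely, rather than passing a placeholder, is what triggers the auto-discovery behavior described in the schema.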
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It adds useful context, such as the dataset size ('15M+ records per year'), cross-referencing capabilities, and auto-discovery of the latest year, but omits critical behavioral aspects like rate limits, authentication needs, pagination, error handling, and response format. The description partially compensates for the missing annotations but still leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first states the core purpose, the second details the data scope and key fields, and the third adds cross-referencing context. Every sentence adds value without redundancy, and it is front-loaded with the main functionality. No wasted words or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description is moderately complete for a search tool with six parameters, no output schema, and no annotations. It covers the dataset's nature, scale, and cross-referencing possibilities, but lacks details on output structure, error cases, and performance constraints. Without annotations or an output schema, the description should do more to set expectations about results, though it meets a minimum viable level for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all six parameters thoroughly. The description adds no parameter-specific semantics beyond that: it mentions 'doctor name/specialty' and 'drugs/devices' but does not explain how these map to the 'doctor' and 'specialty' parameters. A baseline score of 3 is appropriate, since the schema does the heavy lifting and the description contributes nothing further about parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search') and resources ('CMS Open Payments (Sunshine Act) data'), explicitly listing the data scope ('payments from pharma/device companies to doctors') and key fields returned ('exact dollar amounts, payment type, doctor name/specialty, and which drugs/devices are involved'). It distinguishes this tool from its siblings (e.g., open_payments_by_company, open_payments_by_physician) by emphasizing a general search across multiple parameters rather than aggregated views.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('Search CMS Open Payments (Sunshine Act) data') and hints at complementary workflows by mentioning cross-referencing with other datasets ('FDA (drug safety), lobbying (company influence), and clinical trials'). However, it does not name when-not-to-use scenarios or differentiate itself from sibling tools like open_payments_by_physician, so the guidance is sufficient for general usage but not for disambiguation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lzinga/us-government-open-data-mcp'
