
Server Details

GovInfo.gov MCP — full text of US government publications

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-govinfo
GitHub Stars: 0
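
Because the transport is Streamable HTTP, any MCP client can connect directly. Below is a minimal sketch using the official MCP Python SDK, assuming a recent version of the mcp package; SERVER_URL is a placeholder, since the listing above does not show the real endpoint. The per-tool sketches further down reuse the session object opened here.

    import asyncio

    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    # Placeholder URL; substitute the server's actual endpoint.
    SERVER_URL = "https://example.invalid/mcp"

    async def main() -> None:
        # Open the Streamable HTTP transport, then an MCP session over it.
        async with streamablehttp_client(SERVER_URL) as (read, write, _):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([tool.name for tool in tools.tools])  # expect the 16 tools below

    asyncio.run(main())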

Tool Descriptions: A

Average 4.2/5 across 15 of 16 tools scored. Lowest: 2.8/5.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes, but ask_pipeworx, validate_claim, and compare_entities overlap in answering questions about companies. However, descriptions clarify the intended use (general Q&A vs. fact-checking vs. comparison), so only minor confusion is possible.

Naming Consistency: 3/5

Names mix styles: verb_noun (compare_entities, discover_tools), verb alone (forget), noun_noun (entity_profile, pipeworx_feedback), and get_/list_ prefixes. No single pattern dominates, but most are readable.

Tool Count: 5/5

16 tools cover a broad government information domain (SEC, FDA, Census, etc.) with a natural scope—querying packages/granules, searching, resolving entities, comparing, profiling, fact-checking, and memory management. No tool feels extraneous.

Completeness: 5/5

The set covers the full lifecycle of accessing government data: discovery (list_collections, search_packages, get_package, get_granule, list_granules), entity resolution (resolve_entity), profiling (entity_profile), comparison (compare_entities), fact-checking (validate_claim), updates (recent_changes), memory (remember/recall/forget), and feedback. No obvious gaps.

Available Tools

16 tools
ask_pipeworx (A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters (JSON Schema)

question (required): Your question or request in natural language
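
For illustration, a call through the session opened in the connection sketch above; the question text is a made-up example:

    result = await session.call_tool(
        "ask_pipeworx",
        arguments={"question": "What is the current US unemployment rate?"},
    )
    # result.content carries the answer; the description does not specify its exact shape.
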
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. The description mentions routing across multiple sources and returning a result, which is useful. However, it does not disclose potential errors, rate limits, or the nature of the 'result' (e.g., structured data vs. text).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is fairly concise with clear sentences. The first sentence states the core purpose, followed by usage guidance and examples. Minor improvement would be to separate examples into a list for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so the description should explain what the result looks like. It only says 'returns the result' without details on format. Also lacks error handling or limitations, which is important given the broad scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one parameter 'question' with schema description 'Your question or request in natural language'. The description adds value by providing examples and explaining the routing behavior, which goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers natural-language questions by automatically picking the right data source, with specific examples like 'What is the US trade deficit with China?'. It distinguishes from siblings by mentioning routing across 300+ sources, which is unique.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use when a user asks... and you don't want to figure out which Pipeworx pack/tool to call' and provides examples. Does not explicitly state when not to use, but the guidance is strong.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)

type (required): Entity type: "company" or "drug".
values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
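
A sketch of a company-type comparison, using tickers from the description's own examples:

    result = await session.call_tool(
        "compare_entities",
        arguments={"type": "company", "values": ["AAPL", "MSFT"]},
    )
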
Behavior: 4/5

With no annotations provided, the description fully carries the transparency burden. It details data sources (SEC EDGAR/XBRL for companies, FAERS/FDA/clinicaltrials.gov for drugs) and output format (paired data + citation URIs). It does not cover error behavior or latency but sufficiently describes the tool's non-destructive, read-only nature.

Conciseness: 5/5

The description is a single, well-organized paragraph that front-loads the primary purpose, then provides usage triggers, per-type details, output format, and value proposition. Every sentence serves a purpose without repetition or filler.

Completeness: 4/5

Given the tool's moderate complexity (2 parameters with enums) and no output schema, the description adequately covers input constraints (2–5 items, specific values per type) and output structure (paired data fields per type, citation URIs). However, it omits error handling or behavior on invalid inputs.

Parameters: 4/5

Schema coverage is 100% with basic descriptions. The description adds significant meaning by explaining what data each type retrieves (revenue/net income/cash/debt for companies; adverse events/approvals/trials for drugs). This is a clear improvement over the schema alone.

Purpose: 5/5

The description clearly states 'Compare 2–5 companies (or drugs) side by side' with specific verb and resource. It provides example user queries ('compare X and Y', 'X vs Y') and distinguishes itself from related tools like entity_profile by focusing on multi-entity comparison.

Usage Guidelines: 4/5

The description gives explicit trigger phrases and use cases ('when a user says...'), and emphasizes efficiency ('Replaces 8–15 sequential calls'). However, it does not explicitly mention when not to use this tool or name alternatives like entity_profile for single-entity queries.

discover_tools (A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema)

limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
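
A sketch using one of the schema's example queries; the limit value is illustrative:

    result = await session.call_tool(
        "discover_tools",
        arguments={"query": "look up FDA drug approvals", "limit": 10},
    )
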
Behavior: 4/5

No annotations are provided, so the description carries full burden. It states the tool returns tool names and descriptions, implying it is a read-only, non-destructive operation. While it could detail how relevance is determined, the information given is sufficient for safe use. No contradictions.

Conciseness: 4/5

The description is three sentences. The first sentence states purpose, the second lists domains, the third gives usage guidance. It is front-loaded and efficient, though the list of domains is somewhat lengthy. Overall, each sentence earns its place.

Completeness: 5/5

Given no output schema, the description adequately explains the return value: 'top-N most relevant tools with names + descriptions.' It also covers default and max limit. The description fully addresses what the tool does, what it returns, and when to call it.

Parameters: 4/5

Schema coverage is 100% with both 'query' and 'limit' described in the schema. The description adds value by providing example queries (e.g., 'analyze housing market trends'), which clarifies the expected input format beyond the schema's generic description.

Purpose: 5/5

The description clearly states the tool's purpose: 'Find tools by describing the data or task.' It lists numerous specific domains (SEC filings, FDA drugs, etc.) and explicitly states the output: 'Returns the top-N most relevant tools with names + descriptions.' This distinguishes it from sibling tools that perform specific operations.

Usage Guidelines: 5/5

The description provides explicit guidance: 'Call this FIRST when you have many tools available and want to see the option set (not just one answer).' It also implies usage when browsing or discovering tools, which effectively directs the agent on when to use this tool over others.

entity_profile (A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema)

type (required): Entity type. Only "company" supported today; person/place coming soon.
value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
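
A sketch using the ticker from the description; per the parameter notes, a bare company name would first need resolve_entity:

    result = await session.call_tool(
        "entity_profile",
        arguments={"type": "company", "value": "AAPL"},
    )
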
Behavior: 4/5

No annotations provided, so description carries full burden. It discloses returned data types (SEC filings, fundamentals, patents, news, LEI) and mentions citation URIs. Lacks details on error handling or unknown inputs, but overall transparent about its read-only, compound nature.

Conciseness: 5/5

Single paragraph front-loaded with purpose, every sentence adds value. No fluff; includes use cases, input formats, and output summary efficiently.

Completeness: 4/5

Given the tool's complexity (multiple data sources), description covers inputs, outputs, and usage context well. Missing details on pagination or depth of data, but adequate for a profile tool. No output schema, but return contents are outlined.

Parameters: 4/5

Schema covers 100% with descriptions; description adds context: type enum explanation and value formats (ticker vs CIK with examples). Also clarifies that names are not supported, adding meaning beyond schema.

Purpose: 5/5

The description clearly states its purpose ('Get everything about a company in one call') with a specific verb and resource. It distinguishes from siblings by naming multiple data sources (SEC, USPTO, GLEIF, news) and positioning itself as a replacement for 10+ tool calls.

Usage Guidelines: 5/5

Explicitly provides when-to-use examples ('tell me about X', 'give me a profile') and specifies input constraints (ticker or CIK, not names). Directs users to resolve_entity if only a name is available, clearly differentiating from sibling tools.

forget (A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema)

key (required): Memory key to delete

Behavior: 3/5

No annotations are provided, so the description must disclose behavioral traits. It states the tool deletes a memory but does not mention irreversibility, required permissions, or rate limits. Adequate but could be more transparent.

Conciseness: 5/5

Three sentences with no wasted words: first states purpose, second gives usage guidance, third indicates pairing with sibling tools. Front-loaded and efficient.

Completeness: 4/5

Given the tool's simplicity (one required parameter, no output schema), the description covers purpose, usage, and relationships. Missing a note on irreversibility or persistence, but overall complete for a basic deletion tool.

Parameters: 3/5

Schema coverage is 100% and the description adds no additional meaning beyond what the schema already provides for the 'key' parameter. Baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states the action ('Delete'), the resource ('a previously stored memory by key'), and distinguishes from siblings by mentioning pairing with 'remember' and 'recall'.

Usage Guidelines: 5/5

Explicitly states when to use the tool: when context is stale, task is done, or to clear sensitive data. Also hints at alternatives by naming related tools.

get_granule (C)

Granule metadata for a sub-unit within a package.

Parameters (JSON Schema)

granule_id (required): Granule ID
package_id (required): GovInfo packageId

Behavior: 2/5

With no annotations, the description is the sole source for behavioral traits. It only states it retrieves metadata, but does not mention side effects, safety (read-only), return structure, or any constraints. Minimal transparency.

Conciseness: 3/5

The description is a single sentence, which is concise but lacks any structuring like front-loading key information. It earns its place but provides no additional context beyond the name.

Completeness: 2/5

Given the tool's simplicity (2 required params, no output schema), the description is too brief. It does not mention what the returned metadata includes or how it relates to other package/granule tools. Incomplete for practical use.

Parameters: 3/5

Schema description coverage is 100%, so the input schema already documents both parameters. The description adds no additional meaning beyond the schema, but the schema itself is sufficient. Baseline 3 is appropriate.

Purpose: 4/5

The description states it retrieves granule metadata for a sub-unit within a package, clearly indicating the verb and resource. It distinguishes from siblings like list_granules (which lists) and get_package (which gets package-level info), though not explicitly. The purpose is specific and clear.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like list_granules or get_package. The description lacks any usage context or preconditions.

get_package (A)

Metadata for a single package by packageId (e.g., "BILLS-118hr1234ih", "FR-2024-05-12"). Returns title, dates, citations, granule count, download links (PDF/XML/MODS).

Parameters (JSON Schema)

package_id (required): GovInfo packageId
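
A sketch using one of the example packageIds from the description:

    result = await session.call_tool(
        "get_package",
        arguments={"package_id": "BILLS-118hr1234ih"},
    )
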
Behavior: 3/5

No annotations provided, so description carries behavioral burden. It implies read-only operation by stating 'Returns' data, but doesn't explicitly confirm no side effects, auth needs, or rate limits. Adequate but not comprehensive.

Conciseness: 5/5

Two sentences, front-loaded with primary function and an example. No superfluous words; every phrase adds value (purpose, input format, output contents).

Completeness: 4/5

Given no output schema, description lists key return fields (title, dates, citations, granule count, download links). Missing details on error handling for invalid package IDs, but overall sufficient for expected usage.

Parameters: 4/5

Schema description coverage is 100% but only states 'GovInfo packageId'. Tool description adds valuable examples of valid packageId patterns (e.g., 'BILLS-118hr1234ih'), which helps the agent construct correct inputs beyond the schema's minimal description.

Purpose: 5/5

Clearly states it retrieves metadata for a single package by packageId, with examples and a list of returned fields (title, dates, etc.). Distinguishes from sibling tools like get_granule and search_packages by specifying 'single package' vs granules or search results.

Usage Guidelines: 4/5

Describes usage by packageId with concrete examples, making it clear when to use this tool. However, does not explicitly state when NOT to use it or mention alternatives like search_packages for broad queries.

list_collections (A)

List GovInfo collections (BILLS, CFR, USCODE, FR, CHRG, CRPT, HMAN, PLAW, SERIALSET, USCOURTS, etc.) with package counts. Use the collection code with search_packages to filter.

Parameters (JSON Schema)

No parameters


Behavior: 3/5

No annotations are provided, so the description must convey behavioral traits. It indicates a read-like operation (listing) and mentions 'package counts', implying non-destructive behavior. It does not detail side effects, authentication requirements, or rate limits; since the tool is simple and has no parameters, the description suffices.

Conciseness: 5/5

Two sentences: first states the core purpose, second gives a usage tip. No extraneous text; every sentence adds value. The description is front-loaded and compact.

Completeness: 5/5

Given zero parameters, no output schema, and a straightforward listing operation, the description covers everything needed: what the tool returns (collections with counts) and how to use the result with a sibling tool. It is complete for the tool's complexity.

Parameters: 4/5

The input schema has no parameters (0 required, 0 optional), and schema description coverage is 100% (trivially). The description adds value by explaining the output includes collection codes and package counts, and provides examples of collection codes, which is meaningful context beyond the empty schema.

Purpose: 5/5

Description explicitly states 'List GovInfo collections (BILLS, CFR, USCODE, FR, CHRG, CRPT, HMAN, PLAW, SERIALSET, USCOURTS, etc.) with package counts.' It uses a specific verb 'list' and identifies the resource 'GovInfo collections', clearly distinguishing it from sibling tools like search_packages.

Usage Guidelines: 4/5

Description includes direct guidance: 'Use the collection code with search_packages to filter.' This tells the agent when to use this tool (to list all collections) and how to combine it with a sibling tool for filtering, though it does not explicitly state when not to use it.

list_granules (A)

List granules within a package — e.g., sections of a CFR title, individual entries in a Federal Register issue. Returns granule IDs + titles.

Parameters (JSON Schema)

page_size (optional): 1-100 (default 100)
package_id (required): GovInfo packageId
offset_mark (optional): Pagination cursor
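
A sketch of paging through granules. The packageId is illustrative, and since neither the description nor an output schema documents where the next cursor appears in the response, the offset_mark handling below is an assumption:

    first = await session.call_tool(
        "list_granules",
        arguments={"package_id": "CFR-2023-title21-vol1", "page_size": 100},
    )
    # Assumption: the response exposes a pagination cursor to feed back in, e.g.:
    # more = await session.call_tool(
    #     "list_granules",
    #     arguments={"package_id": "CFR-2023-title21-vol1", "offset_mark": cursor},
    # )
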
Behavior: 2/5

With no annotations, the description does not disclose behavioral traits such as pagination behavior, side effects, or required permissions. It only states that it lists granules and returns IDs + titles, leaving gaps for an agent.

Conciseness: 5/5

The description is two concise sentences, front-loaded with purpose and examples. No unnecessary words.

Completeness: 3/5

Given no output schema, the description minimally states return values (granule IDs + titles) but does not explain pagination or other behavioral details. Adequate but not complete for a tool with 3 params and no output schema.

Parameters: 3/5

The input schema has 100% coverage for parameters, so the baseline is 3. The description adds no extra meaning beyond what the schema provides; it does not explain pagination or package_id format beyond the schema.

Purpose: 5/5

The description clearly states the tool lists granules within a package with specific examples (sections of a CFR title, Federal Register issue) and states the return value (granule IDs + titles). It distinguishes itself from siblings like get_granule.

Usage Guidelines: 3/5

The description implies when to use (to list granules in a package) but does not explicitly state when not to use or compare to alternatives like get_granule. No exclusion criteria or prerequisites are mentioned.

pipeworx_feedback (A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema)

type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
context (optional): Optional structured context: which tool, pack, or vertical this relates to.
message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.

Behavior: 4/5

Despite no annotations, the description discloses key behavioral traits: rate-limited to 5 per identifier per day, free, doesn't count against tool-call quota, and team reads digests daily. This provides adequate transparency for a safe, non-destructive feedback tool.

Conciseness: 5/5

The description is concise (about 6 sentences) and front-loaded with the purpose. Every sentence contributes meaningful information without redundancy, making it efficient for an agent to parse.

Completeness: 5/5

For a simple feedback submission tool with no output schema, the description is complete: it covers purpose, usage, rate limits, and expected processing (digests). No critical information is missing.

Parameters: 4/5

Schema coverage is 100%, and the description adds value beyond the schema by clarifying how to use the 'type' enum (e.g., bug = something broke) and providing context on the 'context' object. It also includes a best practice for writing the message.

Purpose: 5/5

The description clearly states the tool's purpose: to submit feedback (bug, feature, data_gap, praise) to the Pipeworx team. It distinguishes itself from sibling tools by being a feedback mechanism rather than a data retrieval or action tool.

Usage Guidelines: 5/5

The description provides explicit usage guidelines: use for bugs, feature requests, data gaps, or praise. It also instructs to describe the issue in terms of Pipeworx tools/packs and not to paste end-user prompts, giving clear when-to-use and how-to-use context.

recall (A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema)

key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 4/5

No annotations provided, so description bears full load. It discloses scoping by identifier and behavior (retrieve vs list). Lacks details on error handling (e.g., missing key) but overall transparent for a read operation.

Conciseness: 5/5

Three sentences, front-loaded with core purpose, no wasted words. Use case and pairing are efficiently integrated.

Completeness: 4/5

No output schema, but description explains return type (value or list of keys). Could specify behavior on missing key or error, but adequately complete for a simple retrieval tool given sibling context.

Parameters: 4/5

Schema coverage is 100% with a clear parameter description. The description adds context on what keys represent (e.g., target ticker, address) and scoping, enhancing understanding beyond schema.

Purpose: 5/5

The description clearly states the tool retrieves a value or lists all keys, with specific verb-resource pairing (retrieve/list, saved keys/values). It distinguishes from siblings by naming remember and forget as complementary tools.

Usage Guidelines: 5/5

Explicitly states when to use (look up stored context) and how (omit key to list). Names alternatives (remember for save, forget for delete) and gives concrete use-case examples (ticker, address, notes).

recent_changes (A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema)

type (required): Entity type. Only "company" supported today.
since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
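
A sketch using the recommended 30-day window; the ticker is illustrative:

    result = await session.call_tool(
        "recent_changes",
        arguments={"type": "company", "value": "MSFT", "since": "30d"},
    )
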
Behavior: 3/5

No annotations provided, so description must fully disclose behavior. It explains the tool fans out to multiple sources, describes the since parameter format, and mentions return fields. However, it omits details on latency, error handling, or data freshness, which are important for an unannotated tool.

Conciseness: 5/5

Single focused paragraph with purpose front-loaded and no extraneous content. Every sentence provides value, and the structure is clear and efficient.

Completeness: 4/5

Without output schema, description explains return format (structured changes, count, citation URIs). Given tool complexity (multiple sources), it covers key aspects but could include error behavior or pagination details for completeness.

Parameters: 4/5

Input schema covers 100% of parameters, and description adds context: type is limited to 'company', since formats and recommended default, and value examples. This enhances understanding beyond the schema alone.

Purpose: 5/5

Description clearly states the tool's purpose: retrieving recent changes for a company. It provides specific example queries and lists the data sources (SEC EDGAR, GDELT, USPTO), distinguishing it from sibling tools like entity_profile or compare_entities.

Usage Guidelines: 4/5

Explicitly states when to use with example user queries and includes a default window recommendation ('30d' or '1m' for typical monitoring). Lacks explicit exclusion criteria or direct comparison to siblings but context is clear.

remember (A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema)

key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
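
A sketch of the full memory cycle across remember, recall, and forget, with a made-up key and value:

    await session.call_tool("remember", arguments={"key": "target_ticker", "value": "AAPL"})
    saved = await session.call_tool("recall", arguments={"key": "target_ticker"})
    await session.call_tool("forget", arguments={"key": "target_ticker"})  # clean up when done
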
Behavior: 4/5

No annotations provided, so description carries full burden. It discloses scoping by identifier, persistence differences between authenticated users (persistent) and anonymous (24 hours), and key-value pair storage. Lacks details on size limits or number of memories, but otherwise comprehensive.

Conciseness: 5/5

Three sentences, front-loaded with purpose, no redundant information. Each sentence adds value: purpose, when to use, and behavioral details.

Completeness: 5/5

Given the simple nature of the tool (store key-value with persistence scoping), the description covers all essential aspects: what it stores, why, and how long. No output schema needed as return is implicit success.

Parameters: 4/5

Input schema has 100% description coverage for both key and value. Description adds more meaning with example key formats and specifying value can be any text, reinforcing the schema's intent.

Purpose: 5/5

The description clearly states the tool saves data for reuse across conversations and sessions, using specific verb 'save' and resource 'data'. It distinguishes from siblings like recall and forget by mentioning pairing with them.

Usage Guidelines: 5/5

Explicitly tells when to use: 'when you discover something worth carrying forward', gives examples (resolved ticker, address, preference), and mentions alternatives: 'Pair with recall to retrieve later, forget to delete'.

resolve_entity (A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema)

type (required): Entity type: "company" or "drug".
value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
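
A sketch resolving a drug name, as in the description's Ozempic example:

    ids = await session.call_tool(
        "resolve_entity",
        arguments={"type": "drug", "value": "ozempic"},
    )
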
Behavior: 3/5

No annotations provided, so description carries full burden. It discloses that the tool returns IDs plus pipeworx:// URIs. However, it does not discuss error handling, rate limits, or behavior when input is ambiguous, which would improve transparency.

Conciseness: 4/5

The description is a single paragraph of about 4 sentences, front-loaded with the primary purpose. It is relatively concise but contains examples and usage hints. Minor improvement could be breaking into shorter sentences or bullet points, but it is well-structured.

Completeness: 4/5

For a simple 2-parameter tool with no output schema, the description is quite complete. It covers purpose, usage context, input formats, and return types. It could detail the return format more, but overall it provides sufficient context for an agent to use the tool effectively.

Parameters: 4/5

Schema coverage is 100%, so baseline is 3. The description adds valuable context beyond schema descriptions: it explains acceptable formats for 'value' (ticker, CIK, name for company; brand/generic for drug) and provides illustrative examples, enhancing parameter understanding.

Purpose: 5/5

The description clearly states it looks up canonical/official identifiers for companies or drugs, specifying ID types (CIK, ticker, RxCUI, LEI) and providing concrete examples. It distinguishes itself by noting it replaces 2-3 lookup calls and should be used before other tools that need identifiers.

Usage Guidelines: 4/5

The description explicitly tells when to use ('when a user mentions a name and you need the CIK...') and advises using it before other tools. It does not explicitly mention when not to use, but the guidance is clear and actionable.

search_packages (B)

Full-text + faceted search across GovInfo. Filter by collection codes (comma-separated), date range, congress, court (for USCOURTS), and free-text query. Returns package IDs and titles.

Parameters (JSON Schema)

query (required): Free-text search
date_to (optional): YYYY-MM-DD
congress (optional): Congress number (e.g., 118) — for BILLS/CHRG/CRPT
date_from (optional): YYYY-MM-DD
page_size (optional): 1-100 (default 25)
collections (optional): Comma-separated collection codes (e.g., "BILLS,FR")
offset_mark (optional): Pagination cursor from previous response
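
A sketch of a filtered search; all argument values are illustrative:

    result = await session.call_tool(
        "search_packages",
        arguments={
            "query": "artificial intelligence",
            "collections": "BILLS",
            "congress": 118,
            "date_from": "2024-01-01",
            "date_to": "2024-12-31",
            "page_size": 25,
        },
    )
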
Behavior: 2/5

With no annotations, the description carries the full burden of behavioral disclosure. It omits critical information such as pagination behavior, rate limits, read-only nature, and error handling. The statement 'Returns package IDs and titles' is too brief.

Conciseness: 4/5

The description is brief, efficiently listing filters and output with no redundancy. However, it could be better organized for readability.

Completeness: 2/5

Given 7 parameters and no output schema, the description should explain pagination (offset_mark), default page_size, and response format in more detail. It only mentions returning package IDs and titles, leaving significant gaps.

Parameters: 3/5

Schema description coverage is 100%, and the description adds minimal additional meaning beyond the schema (e.g., noting that congress applies to BILLS/CHRG/CRPT). Baseline score is appropriate given high coverage.

Purpose: 5/5

The description explicitly states 'Full-text + faceted search across GovInfo' and lists specific filters, making its purpose clear and distinct from sibling tools like get_package or list_collections.

Usage Guidelines: 3/5

The description lists available filters but does not provide explicit guidance on when to use this tool versus alternatives, nor does it mention when not to use it. Usage context is implied but not directly addressed.

validate_claim (A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema)

claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
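
A sketch using the schema's own example claim:

    verdict = await session.call_tool(
        "validate_claim",
        arguments={"claim": "Apple's FY2024 revenue was $400 billion"},
    )
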
Behavior: 4/5

No annotations provided, so description carries full burden. It discloses return types (verdict, extracted form, actual value with citation, percent delta) and notes it replaces 4-6 sequential calls. It does not mention side effects, but none are expected for a read operation.

Conciseness: 5/5

The description is four sentences, front-loaded with the core action, and every sentence provides necessary information without redundancy.

Completeness: 5/5

Given the tool has only one parameter and no output schema, the description fully covers purpose, usage, return values, and limitations. No additional information is needed.

Parameters: 4/5

Schema coverage is 100% with a clear description of the 'claim' parameter. The description adds value by providing examples and clarifying the scope of claims, which goes beyond the schema.

Purpose: 5/5

The description clearly states the action (fact-check, verify, confirm/refute) and the resource (natural-language factual claim). It uses specific verbs and distinguishes from sibling tools, which include general tools like compare_entities and entity_profile but none dedicated to fact-checking.

Usage Guidelines: 4/5

The description explicitly says 'Use when an agent needs to check whether something a user said is true' and provides example queries. It also notes the scope (v1 supports company-financial claims) but does not explicitly state when not to use or name alternatives.
