Server Details

OpenStates MCP — bills, legislators, votes in all 50 US states

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-openstates
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.3/5 across 15 of 15 tools scored.

Server Coherence: C
Disambiguation: 3/5

Tools span multiple domains (financial data, legislative data, memory management) but some overlap exists: ask_pipeworx and validate_claim both answer free-form questions; entity_profile and get_bill/get_legislator both provide entity information. While most tools have distinct purposes, the mixture of domains and overlapping question-answering tools creates moderate ambiguity.

Naming Consistency: 3/5

Most tools use snake_case verb_noun pattern (get_bill, search_bills, resolve_entity), but several deviate: forget, recall, remember are single verbs; entity_profile, recent_changes, pipeworx_feedback are noun_noun or descriptive. This inconsistency makes the naming pattern less predictable.

Tool Count: 2/5

A count of 15 tools is high for a server named 'OpenStates' that provides only 4 legislative tools. The remaining 11 tools are unrelated to state legislature data (financial, memory, feedback), making the set feel bloated and unfocused relative to the server's stated purpose.

Completeness: 3/5

For the legislative domain, the set covers basic CRUD-like operations (get_bill, get_legislator, search_bills, search_legislators) but lacks tools for committees, votes, or detailed bill actions. The inclusion of many non-legislative tools does not fill these gaps, leaving the legislative surface incomplete.

Available Tools

15 tools
ask_pipeworx (A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
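
An illustrative sketch (not from the listing): calling ask_pipeworx over Streamable HTTP with the official MCP Python SDK. The endpoint URL is a placeholder, since the page does not display one.

```python
# Minimal sketch, assuming the MCP Python SDK and a placeholder endpoint URL.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; the listing omits the real URL

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "ask_pipeworx",
                {"question": "What is the current US unemployment rate?"},
            )
            print(result.content)  # content blocks returned by the tool

asyncio.run(main())
```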
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavioral traits. It explains that the tool routes across many sources, picks the right tool, fills arguments, and returns results. However, it does not mention side effects, authentication needs, rate limits, or whether it modifies data. As a query tool, it is likely read-only, but that is not explicitly stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is about five sentences long, front-loaded with the purpose and followed by examples. It is clear and relatively concise, though slightly verbose with the list of data sources. Every sentence contributes meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (300+ sources, routing), the description provides a good overview and examples. However, it lacks details on output format, error handling, limitations, or usage restrictions. With no output schema, more information about the return value would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter 'question' with a brief description. The description adds significant value by explaining the nature of the question, providing examples, and detailing how the tool processes the query. Schema coverage is 100%, but the description enhances understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool answers natural-language questions by picking the right data source, using specific verbs ('Answer') and resource ('natural-language question'). It distinguishes itself from sibling tools (e.g., search_bills, compare_entities) by being a general routing tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: when a user asks questions like 'What is X?' or 'Find Z' and you don't want to figure out which specific tool to call. It provides multiple examples but does not explicitly state when not to use it or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema):
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
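
To make the two modes concrete, here is a sketch of valid argument shapes, plus a hypothetical client-side guard for the documented 2–5 item range (the guard is illustrative, not part of the server):

```python
# Illustrative argument shapes for compare_entities, per the schema above.
company_args = {"type": "company", "values": ["AAPL", "MSFT", "GOOGL"]}
drug_args = {"type": "drug", "values": ["ozempic", "mounjaro"]}

def check_values(values: list[str]) -> list[str]:
    # Hypothetical client-side guard for the documented 2-5 item range.
    if not 2 <= len(values) <= 5:
        raise ValueError("compare_entities accepts 2-5 values")
    return values
```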
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Given no annotations, the description discloses key behaviors: data sources (SEC EDGAR/XBRL for companies, FAERS for drugs) and returned data (paired data + citation URIs). Could be more detailed on output structure or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, dense paragraph front-loading core purpose. Every sentence adds value, no fluff. Efficiently covers usage triggers and type-specific details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description mentions paired data and citation URIs. Could specify output format more, but given the tool's complexity and lack of annotations, it provides sufficient context for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds meaning by explaining what data is pulled for each type and the format of values (tickers/CIKs vs drug names). This goes beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it compares 2-5 companies or drugs side by side, with specific examples of user queries. It distinguishes itself from sibling tools like entity_profile by emphasizing efficiency and batch comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit triggers ('compare X and Y', 'X vs Y', etc.) and use cases (tables/rankings). Lacks explicit when-not-to-use guidance, but the positive indicators are strong.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema):
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
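
A sketch of the discover-first pattern the description recommends; clamping limit mirrors the documented default and max, and is a client-side assumption rather than server behavior:

```python
# Illustrative discover_tools arguments; limit clamped to the documented 1-50 range.
def discover_args(query: str, limit: int = 20) -> dict:
    return {"query": query, "limit": max(1, min(limit, 50))}

print(discover_args("look up FDA drug approvals", limit=10))
# {'query': 'look up FDA drug approvals', 'limit': 10}
```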
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the burden. It explains that the tool returns 'the top-N most relevant tools with names + descriptions.' It does not detail any side effects, but for a read-only discovery tool, this is sufficient. Could mention it does not modify data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loads the purpose, gives concrete examples, and ends with a clear directional statement. Every sentence provides value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool (no output schema, no nested objects), the description covers the essential behavior: searching the catalog and returning a list of names and descriptions. It also positions the tool relative to its peers, making it complete for its context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for both parameters. The description adds value by providing concrete query examples and clarifying limit defaults/max, going beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Find tools by describing the data or task' and provides specific examples like SEC filings, FDA drugs, etc. It distinguishes from siblings by positioning itself as a first-call discovery tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs 'Use when you need to browse, search, look up, or discover what tools exist' and 'Call this FIRST when you have many tools available and want to see the option set.' This provides strong guidance on when and how to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

entity_profile (A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema):
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
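
The schema's advice to resolve names first implies a two-step chain. A sketch, where `call_tool` stands in for any MCP client helper and the resolver's output key is assumed (the listing shows no output schema):

```python
# Sketch of the name -> identifier -> profile chain described above.
# `call_tool` is a hypothetical async helper: (tool_name, arguments) -> parsed result.
async def profile_by_name(call_tool, name: str) -> dict:
    resolved = await call_tool("resolve_entity", {"type": "company", "value": name})
    ticker = resolved["ticker"]  # assumed key; no output schema is published
    return await call_tool("entity_profile", {"type": "company", "value": ticker})
```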
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description details what the tool returns (SEC filings, fundamentals, patents, news, LEI) and specifies input constraints (ticker or CIK). It does not cover rate limits or pagination, but the behavioral disclosure is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficient and front-loaded with the core purpose. It is a single paragraph but packs in use cases, inputs, and return data without redundancy. Slightly verbose but every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of aggregating multiple data sources and the absence of an output schema, the description covers key return categories and input handling. It could elaborate on response format or error cases, but it is largely complete for the tool's profile nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters are described in the schema with 100% coverage. The description adds significant value by explaining that type only supports 'company', value accepts ticker or CIK, and advises using resolve_entity for names, going beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get everything about a company in one call' and lists specific user queries like 'tell me about X' and 'research Microsoft'. It distinguishes itself from alternatives by noting it replaces calling 10+ tools across multiple data sources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly defines when to use (user asks for company info) and when not to (when only a name is provided, instructs to use resolve_entity first). Also mentions the alternative of calling multiple pack tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema):
- key (required): Memory key to delete
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It indicates destructive action ('Delete') but doesn't mention irreversibility, auth needs, or side effects, leaving some behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the purpose and followed by usage guidelines. Every sentence is essential with no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one param, no output schema, no annotations), the description covers purpose, usage, and pairing. Could mention irreversibility for completeness, but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'key' described as 'Memory key to delete'. The description adds context by specifying 'by key' and 'previously stored', but adds little beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'Delete a previously stored memory by key', which is a specific verb and resource, and distinguishes from siblings like 'remember' and 'recall' by focusing on deletion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly specifies when to use: 'Use when context is stale, the task is done, or you want to clear sensitive data' and suggests pairing with siblings 'remember and recall', providing clear guidance on alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_bill (A)

Fetch a single bill with full detail: versions, sponsorships, related/companion bills, actions, and votes. Pass the OpenStates ID (e.g., "ocd-bill/abc...") or a state/session/identifier triple.

Parameters (JSON Schema):
- session (optional): Session ID (use with jurisdiction + identifier)
- identifier (optional): Bill identifier within the session (e.g., "AB-123")
- jurisdiction (optional): State code (use with session + identifier)
- openstates_id (optional): OpenStates bill ID (preferred, e.g., "ocd-bill/...")
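
The two addressing modes, written out with the schema's own example values:

```python
# Mode 1: OpenStates ID (preferred per the schema; ID elided as in the docs).
by_id = {"openstates_id": "ocd-bill/..."}

# Mode 2: jurisdiction + session + identifier triple.
by_triple = {"jurisdiction": "CA", "session": "20232024", "identifier": "AB-123"}
```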
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the burden. It discloses that the tool fetches a single bill with full details and lists the components, implying a read-only operation. There are no contradictions, but it lacks deeper details such as auth or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with front-loaded purpose and clear instructions, no redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description adequately lists return fields (versions, sponsorships, etc.). Input options are fully covered. For a single-bill fetch, the description is complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for each parameter. The description adds value by explaining the two usage modes (OpenStates ID or triple), which is not in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Fetch a single bill with full detail', specifying the verb and resource, and enumerates what details are included (versions, sponsorships, etc.), effectively distinguishing it from siblings like search_bills.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on how to pass input: either an OpenStates ID or a jurisdiction/session/identifier triple. Does not compare with alternatives, but the usage context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_legislator (A)

Fetch a single legislator by OpenStates person ID. Returns biographical info, current roles, prior offices, contact methods, sources.

Parameters (JSON Schema):
- person_id (required): OpenStates person ID (e.g., "ocd-person/...")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior. It lists return fields but does not mention side effects, safety (e.g., read-only), rate limits, or error conditions. Adequate but incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single efficient sentence, front-loaded with core functionality, and contains no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description sufficiently outlines what is returned. However, the absence of any mention of read-only behavior or error handling keeps it from being fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters, but the description adds value by clarifying the ID format (OpenStates person ID) and providing an example pattern, going beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Fetch', the resource 'a single legislator', and the identifier 'OpenStates person ID'. It distinguishes from sibling tools like search_legislators or get_bill, enabling proper tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what the tool does but does not explicitly state when to use it versus alternatives like search_legislators. It implies use when a person ID is known, but lacking usage conditions or exclusions lowers the score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema):
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
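
A well-formed payload under this schema; the shape of context is a guess, since the schema only calls it "structured context":

```python
feedback_args = {
    "type": "data_gap",  # one of: bug | feature | data_gap | praise | other
    "message": "search_bills cannot filter by committee; vote data is also missing.",
    "context": {"tool": "search_bills"},  # assumed shape for the optional context
}
```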
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears the full burden. It discloses that the tool is free, rate-limited to 5 per identifier per day, and does not count against the tool-call quota. It also explains how feedback is processed (digests read daily).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the primary purpose. Each sentence adds value, though it could be slightly more concise without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a feedback tool with 3 params and no annotations or output schema, the description is comprehensive: it covers all feedback types, usage scenarios, behavioral traits, and formatting guidelines. Nothing essential is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds significant value by explaining the semantics of the enum values (e.g., 'bug = something broke or returned wrong data') and providing context on how to write the message and context fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: submitting feedback (bug, feature, data_gap, praise) to the Pipeworx team. It distinguishes itself from sibling tools like ask_pipeworx or discover_tools by focusing on reporting issues or praise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance for each feedback type (bug, feature, data_gap, praise) and what not to do (don't paste end-user prompt). It also mentions rate limits and quota.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description covers the key behaviors: dual mode (list vs. retrieve), scoping to the identifier, and pairing with remember/forget. It does not mention idempotency or return format, but for a simple read operation it is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with core action, followed by usage context and pairing. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one optional parameter and no output schema, the description is nearly complete. It could mention the return format (string vs. list), but that is not essential for understanding usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds context: 'omit to list all keys' and references to 'remember'. This goes beyond the schema by explaining the optionality and the purpose of the key.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves a value by key or lists all keys if omitted, linking directly to sibling tools 'remember' and 'forget'. The verb-resource pair is specific and distinct from siblings like 'ask_pipeworx'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly describes when to use (look up previously stored context) and mentions scoping (anonymous IP, key hash, account ID). Lacks explicit when-not-to-use or alternatives, but the pairing with remember/forget provides sufficient guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_changes (A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema):
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
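
Because since accepts either form, a client might normalize the relative shorthand into an ISO date itself. A sketch, with month and year lengths approximated as 30 and 365 days (the server's own parsing may differ):

```python
# Normalize "7d" / "30d" / "3m" / "1y" into an ISO date string.
from datetime import date, timedelta

UNIT_DAYS = {"d": 1, "m": 30, "y": 365}  # approximation; the server may differ

def since_to_iso(shorthand: str, today: date | None = None) -> str:
    today = today or date.today()
    amount, unit = int(shorthand[:-1]), shorthand[-1]
    return (today - timedelta(days=amount * UNIT_DAYS[unit])).isoformat()

assert since_to_iso("7d", today=date(2026, 4, 8)) == "2026-04-01"
```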
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description fully discloses behavior: parallel fan-out to three sources, accepted date formats, and return structure. It does not cover potential errors, rate limits, or authentication needs, but the disclosed details are sufficient for basic usage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph with front-loaded purpose. It includes useful examples and details but could be more concise; some sentences add minor redundancy. Still, it is well-structured and informative.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description provides adequate context: inputs, parallel execution, and output shape. It lacks mention of error handling or limitations, but for a query tool, it covers essential aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the description still adds significant value by explaining accepted formats for 'since' (ISO date or relative shorthand) and that 'value' can be a ticker or CIK. It clarifies the 'type' restriction to 'company' only, going beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: retrieving recent changes for a company. It provides example user queries and specifies the data sources (SEC EDGAR, GDELT, USPTO), making it highly clear and distinguishable from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit usage guidance such as 'Use when a user asks...' and provides numerous example queries. However, it does not mention when not to use the tool or suggest alternatives, which would enhance guidance further.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
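
The three memory tools form a small lifecycle. A sketch, with `call_tool` as a hypothetical MCP client helper (not part of this server):

```python
# remember -> recall -> forget roundtrip, per the descriptions above.
async def memory_roundtrip(call_tool):
    await call_tool("remember", {"key": "target_ticker", "value": "AAPL"})
    saved = await call_tool("recall", {"key": "target_ticker"})  # fetch one value
    keys = await call_tool("recall", {})  # omit key to list all saved keys
    await call_tool("forget", {"key": "target_ticker"})  # delete when stale
    return saved, keys
```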
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses scoping ('scoped by your identifier'), persistence ('Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours'), and the key-value nature. However, it lacks details on overwriting behavior or limits on storage size.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with four sentences, each adding value. The purpose is front-loaded, and the structure is logical: purpose, usage, scoping, paired tools.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple storage tool with 2 parameters and no output schema, the description covers all necessary aspects: when to use, what it stores, scoping, retention, and related tools. No gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. The description adds contextual examples (e.g., 'subject_property', 'target_ticker') but does not significantly enhance the schema's meaning. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Save data the agent will need to reuse later' with concrete examples like 'a resolved ticker, a target address'. It distinctly differentiates from siblings 'recall' and 'forget' by mentioning them explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use: 'when you discover something worth carrying forward'. It also alludes to alternatives (recall, forget) and explains persistence behavior (authenticated vs anonymous), though it could explicitly state when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema):
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
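
The value field is polymorphic by type; the schema's own examples, written out:

```python
# Accepted input forms for resolve_entity, per the schema above.
company_by_name = {"type": "company", "value": "Apple"}      # plain name
company_by_ticker = {"type": "company", "value": "AAPL"}     # ticker
company_by_cik = {"type": "company", "value": "0000320193"}  # zero-padded CIK
drug_by_name = {"type": "drug", "value": "ozempic"}          # brand or generic name
```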
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies read-only behavior (a lookup) and mentions returning IDs plus citation URIs. However, it does not disclose permissions, error handling, or data freshness. Adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph of moderate length, front-loading the main purpose. It is efficient but could be more structured (e.g., bullet points) to improve scannability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains return values (IDs, citation URIs) and gives examples. It does not cover error cases or missing entities, but for a simple lookup tool it is fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds significant context: it explains the enum values (company, drug) with examples for the value field (ticker, CIK, or name for company; brand or generic name for drug), going beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Look up the canonical/official identifier for a company or drug.' It specifies the types of identifiers returned (CIK, ticker, RxCUI, LEI) and gives concrete examples, distinguishing it from sibling tools like entity_profile.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance: 'Use when a user mentions a name and you need the CIK...' and 'Use this BEFORE calling other tools that need official identifiers.' It also claims to replace multiple lookup calls. Lacks explicit when-not-to-use but strong overall.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_bills (A)

Search bills in any US statehouse. Pass jurisdiction as a 2-letter state code (e.g., "CA", "NY", "TX") or full name. Returns bill identifiers, titles, classifications, last action, sponsors, and OpenStates IDs for use with get_bill.

Parameters (JSON Schema):
- page (optional): 1-based page (default 1)
- sort (optional): updated_desc | updated_asc | first_action_desc | first_action_asc | latest_action_desc | latest_action_asc
- query (optional): Free-text search across title/summary
- session (optional): Session identifier (e.g., "20232024")
- sponsor (optional): Legislator name filter
- per_page (optional): 1-50 (default 20)
- jurisdiction (required): 2-letter state code or jurisdiction name
- classification (optional): bill | resolution | constitutional amendment | etc.
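
The description points at a search-then-fetch chain: search_bills returns OpenStates IDs that feed get_bill. A sketch, with `call_tool` as a hypothetical client helper and the result keys assumed (no output schema is published):

```python
# Search a statehouse, then fetch the first hit in full detail.
async def first_matching_bill(call_tool, state: str, text: str):
    page = await call_tool("search_bills", {
        "jurisdiction": state,  # 2-letter code, e.g., "CA"
        "query": text,
        "per_page": 20,         # schema allows 1-50, default 20
        "sort": "updated_desc",
    })
    results = page.get("results", [])  # assumed key
    if not results:
        return None
    return await call_tool("get_bill", {"openstates_id": results[0]["openstates_id"]})
```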
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden. It states that the tool searches and returns summary data, but does not disclose pagination behavior, error handling, rate limits, or the effect of sorting. It adds some value but not rich behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. Front-loaded with purpose. Highly concise and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, no output schema, and no annotations, the description explains jurisdiction format and return fields, which aids understanding. Lacks guidance on pagination limits or error scenarios, but is largely adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, baseline is 3. Description adds value by explaining jurisdiction format and return fields, but does not provide additional meaning for other parameters beyond the schema. Adequate but not exceptional.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches bills in US statehouses, specifies input format (2-letter code or full name), and lists return fields. It distinguishes from sibling tools like search_legislators and get_bill by focusing on bills and mentioning integration with get_bill.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning chaining with get_bill, but does not explicitly state when to use vs alternatives or when not to use. It provides clear context for a search tool but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_legislators (A)

Find state legislators. Filter by jurisdiction, name, chamber (upper/lower), party, or district. Returns name, current/prior roles, party, contact details, OpenStates IDs.

Parameters (JSON Schema):
- name (optional): Name fragment
- page (optional): 1-based page
- party (optional): Party name (e.g., "Democratic", "Republican")
- district (optional): District identifier
- per_page (optional): 1-50 (default 20)
- jurisdiction (optional): 2-letter state code or name
- org_classification (optional): upper | lower | legislature
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must cover behavior. It lists return fields but omits pagination details (e.g., page and per_page are in schema but not mentioned). No mention of side effects, auth, or rate limits, but search tools are inherently safe reads.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences with front-loaded purpose. Every word adds value, and there is no repetition of schema information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 7 optional parameters and no output schema, the description adequately covers purpose, filters, and return fields. It omits any mention of pagination behavior, but the schema's page and per_page parameters imply it. Overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents all parameters. The description groups filters (name, party, etc.) but adds no new semantic detail beyond the existing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Find state legislators' with a specific verb and resource. It lists filter fields and return fields, distinguishing it from sibling tools like 'get_legislator' which targets a single legislator.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides filter options and return fields, giving some context on when to use the tool. However, it lacks explicit guidance on when not to use it or comparisons with alternatives like 'get_legislator' or 'search_bills'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_claim (A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema):
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
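
The five verdicts invite a small dispatch on the client side. A sketch; the result shape is assumed, since no output schema is shown:

```python
# Branch on the verdict enum the description enumerates.
VERDICTS = {"confirmed", "approximately_correct", "refuted", "inconclusive", "unsupported"}

def summarize(verdict: str, delta_pct: float | None = None) -> str:
    if verdict not in VERDICTS:
        raise ValueError(f"unexpected verdict: {verdict}")
    if verdict == "confirmed":
        return "Claim matches the cited source."
    if verdict == "approximately_correct":
        return f"Close: off by about {delta_pct}% from the actual value."
    if verdict == "refuted":
        return "Claim contradicts the cited source."
    return "Could not be checked against the supported sources."  # inconclusive / unsupported
```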
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full weight. It discloses the operation (fact-checking), constraints (company-financial claims only), output format (verdict, structured form, actual value with citation, percent delta), and even notes efficiency gains (replaces 4–6 sequential calls). It does not mention authorization or side effects, but as a read-only tool this is acceptable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph of four sentences, efficiently conveying purpose, use case, scope, output, and efficiency. It is front-loaded with the core verb and resource. Slightly more could be trimmed, but overall well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description is remarkably complete. It covers what, when, how (domain), output details, and even why it's valuable. No additional context seems necessary for an agent to select and invoke this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (single 'claim' parameter well-described in schema). The description reiterates the parameter's purpose and adds example claims, but does not provide significant additional semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Fact-check, verify, validate, or confirm/refute') and the resource ('natural-language factual claim against authoritative sources'). It specifies the domain (company-financial claims via SEC EDGAR/XBRL) and lists output types, making the purpose highly specific and distinct from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases ('Use when an agent needs to check whether something a user said is true') with concrete example questions. It mentions scope limitations ('v1 supports company-financial claims'), but does not explicitly advise against use for other claim types or suggest alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
