Server Details

eBird MCP — Cornell Lab of Ornithology citizen-science bird observations

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-ebird
GitHub Stars: 0
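
Because the transport is Streamable HTTP, any MCP client that supports it can connect. Below is a minimal connection sketch using the official MCP Python SDK; the endpoint URL is a placeholder, since the listing does not show the real one, and the per-tool snippets further down reuse this `session`.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint; substitute the server's actual Streamable HTTP URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # streamablehttp_client yields a read stream, a write stream, and a
    # session-id callback; ClientSession drives the MCP handshake.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # List the 16 tools this server advertises.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

asyncio.run(main())
```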

Tool Descriptions (A)

Average 4.3/5 across 16 of 16 tools scored. Lowest: 3.4/5.

Server Coherence (C)
Disambiguation: 2/5

The tool set contains a mix of general-purpose data tools (e.g., ask_pipeworx, entity_profile) and eBird-specific tools. An agent could easily confuse which tool to use for birding vs. financial data, leading to misselection.

Naming Consistency: 3/5

Tool names are mostly in snake_case and follow a verb_noun pattern (e.g., find_species, list_subregions), but there are deviations such as nearby_observations (adjective_noun) and ask_pipeworx (a verb paired with a product name rather than a resource). Overall, the pattern is moderately consistent.

Tool Count: 2/5

With 16 tools, the count is high for a birding-focused server, and many tools are unrelated to eBird (e.g., compare_entities, validate_claim). This suggests poor scoping; only about 5 tools are relevant to birding.

Completeness: 2/5

For eBird birding, the server provides basic species lookup, region listing, and observation queries, but lacks submission, identification, or historical data tools. The additional general-purpose tools do not fill these gaps, leaving the core domain incomplete.

Available Tools

16 tools
ask_pipeworx (A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters
- question (required): Your question or request in natural language
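
For illustration, a call might look like this, reusing the `session` from the connection sketch above:

```python
# Question taken from the tool's own examples.
result = await session.call_tool(
    "ask_pipeworx",
    {"question": "What is the US trade deficit with China?"},
)
```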
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that the tool routes across many sources, picks the right tool, fills arguments, and returns results. However, it does not mention whether it is read-only, any potential side effects, rate limits, or error handling. This leaves some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that is somewhat lengthy. It is front-loaded with the core purpose, but the inclusion of multiple examples adds bulk. Could be more concise by separating usage instructions from examples, but each sentence is relevant.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one simple parameter and no output schema, the description covers the main aspects: what it does, when to use it, and provides examples. However, it lacks information about the return format (e.g., text or structured data) and does not address limitations or edge cases, making it slightly incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter 'question' described as 'Your question or request in natural language'. The tool description adds significant value by providing example queries (e.g., 'What is the US trade deficit with China?'), which helps the agent understand the scope and format. Thus, it exceeds the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: answering natural-language questions by automatically selecting the appropriate data source. It uses specific verbs like 'Answer' and 'pick', and distinguishes itself from sibling tools by noting it routes across many sources, so the user doesn't need to choose manually.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when to use the tool: when a user asks questions like 'What is X?', 'Look up Y', etc., and the agent doesn't want to figure out which Pipeworx tool to call. It implicitly contrasts with using specific tools like compare_entities or entity_profile, offering clear context for selection. It does not give explicit 'when not to use' guidance, but it is sufficient for general use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
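
An illustrative invocation, again assuming the `session` from the connection sketch:

```python
# Tickers come from the schema's own example list.
result = await session.call_tool(
    "compare_entities",
    {"type": "company", "values": ["AAPL", "MSFT"]},
)
```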
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses data sources (SEC EDGAR/XBRL, FAERS), fields pulled, and return format (paired data + citation URIs). Communicates it replaces many sequential calls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Approximately 100 words, front-loaded with main purpose, then detailed breakdown by type. Every sentence adds distinct information, no waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with two entity types, no output schema, and no annotations, the description covers inputs, behavior, and output adequately. Could mention pagination or limits, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters with descriptions. Description adds context: explains 'type' enum values, gives concrete examples for 'values' (tickers vs drug names). Adds meaningful usage guidance.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states specific verb 'compare', target entities (companies/drugs), and number limits (2-5). Distinguishes from sibling 'entity_profile' by focusing on side-by-side comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit use-case triggers ('compare X and Y', 'X vs Y') and explains what each type returns. Does not explicitly state when not to use, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)
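
A sketch of a call, reusing the `session` from the connection example:

```python
# Query text is one of the schema's own examples; limit stays under the max of 50.
result = await session.call_tool(
    "discover_tools",
    {"query": "look up FDA drug approvals", "limit": 10},
)
```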
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It states it returns 'top-N most relevant tools with names + descriptions,' implying a read-only, non-destructive operation. No mention of auth or rate limits, but the behavioral scope is adequately disclosed for a discovery tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, well-structured, and front-loaded with the core purpose. The list of example domains adds value without redundancy. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (no output schema, full parameter coverage), the description fully covers what the agent needs: what it does, when to use, and what it returns. It is complete for its complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds no extra meaning beyond the schema. The query parameter is described as 'natural language description' and limit as 'maximum number,' which matches the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find tools by describing the data or task.' It uses a specific verb ('discover tools') and distinguishes itself from siblings by focusing on tool discovery, not data retrieval or analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use ('Use when you need to browse, search, look up...') and provides a strategic recommendation: 'Call this FIRST when you have many tools available and want to see the option set.' This guides the agent away from using it when a specific tool is already known.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

entity_profile (A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
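
An illustrative call, reusing the `session` from the connection sketch:

```python
# Ticker form of "value", as the description recommends.
result = await session.call_tool(
    "entity_profile",
    {"type": "company", "value": "AAPL"},
)
```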
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses what the tool returns: recent SEC filings, revenue/net income/cash position, USPTO patents, news mentions, and LEI, with citation URIs. Clearly a read operation, no destructive actions implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Slightly lengthy but every sentence earns its place. Front-loads purpose and usage examples. Could trim 'pipeworx:// citation URIs' but it's informative. Still reasonably concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description enumerates all return categories (filings, fundamentals, patents, news, LEI), making the tool's behavior fully predictable. Complete for the complexity of aggregating multiple sources.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% but description adds critical context: 'type' is only 'company' currently, 'value' can be ticker or CIK, and names are not supported (delegating to resolve_entity). Adds meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Get everything about a company in one call' with a specific verb and resource. Distinguishes from siblings by contrasting with calling 10+ separate tools, and provides clear usage examples.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit examples of user queries that should trigger this tool ('tell me about X', 'give me a profile of Acme', etc.) and explicitly states when not to use it (names not supported—use resolve_entity first).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_species (A)

Search the eBird taxonomy by common or scientific name. Returns the eBird species code needed by the observation tools.

Parameters
- query (required): Common or scientific name
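
A sketch of a call, assuming the `session` from the connection example:

```python
# The species name here is illustrative, not from the listing.
result = await session.call_tool(
    "find_species",
    {"query": "Northern Cardinal"},
)
```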
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description effectively communicates the tool's behavior: it searches by name and returns a code. While it does not detail edge cases or limitations, the simple nature of the tool means the description is sufficiently transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise, front-loaded sentences with no superfluous words. Every sentence adds essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple one-parameter tool with no output schema, the description fully covers the necessary context: what it searches, what it returns, and why it's needed. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema defines 'query' as a string. The description adds value by explaining that the query is a common or scientific name and that the output is a species code, providing context beyond the schema alone. Schema coverage is 100%, but the description enriches the semantic understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Search the eBird taxonomy' with a specific verb and resource, and explains the output (eBird species code) and its relevance to observation tools. It clearly distinguishes itself from sibling observation tools like 'nearby_observations' and 'recent_observations'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use as a prerequisite for observation tools ('needed by the observation tools'), but does not explicitly state when to use vs alternatives. Since no other search tools exist among siblings, the usage context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters
- key (required): Memory key to delete
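
For illustration (the key name borrows an example from the remember tool's schema):

```python
result = await session.call_tool("forget", {"key": "target_ticker"})
```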
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full burden. It discloses the destructive action of deletion but doesn't detail permanence or consequences. However, for a simple key-based delete, this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no unnecessary words. Front-loaded with the core action. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description fully covers purpose, usage context, and relationships to sibling tools. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the schema already describes the 'key' parameter as 'Memory key to delete'. The description adds 'by key' but no extra semantics beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Delete a previously stored memory by key.' The verb 'Delete' and resource 'memory' are specific. It distinguishes from siblings by mentioning pairing with 'remember' and 'recall'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides context: 'Use when context is stale, the task is done, or you want to clear sensitive data.' Also suggests pairing with 'remember' and 'recall', offering guidance on when to use vs alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_subregions (A)

Child regions of a parent. region_type is "country", "subnational1" (states/provinces), or "subnational2" (counties). parent_region_code uses eBird codes ("world", "US", "US-CA").

Parameters
- region_type (required): country | subnational1 | subnational2
- parent_region_code (optional): Parent region (default "world")
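
An illustrative call, reusing the `session` from the connection sketch:

```python
# List US states/provinces using the eBird codes the description cites.
result = await session.call_tool(
    "list_subregions",
    {"region_type": "subnational1", "parent_region_code": "US"},
)
```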
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It explains the input parameters and valid values but does not disclose any behavioral traits like error handling, rate limits, or output format. It is minimally adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no redundant information. It is front-loaded with the core purpose and immediately provides necessary context about region types and codes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity, the description is adequate but could be improved by mentioning what the output looks like (e.g., list of region codes/names) or any limitations like maximum depth. It lacks full contextual completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining eBird code examples and clarifying default parent region, which improves understanding beyond the schema's basic descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it lists child regions of a parent, specifying region types and parent code format. This clearly identifies the tool's function, though it doesn't explicitly use the verb 'list' in the first sentence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context about region types and parent codes but does not specify when to use this tool over siblings or any exclusions. It implies usage for hierarchical region queries but lacks explicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

nearby_observations (A)

Recent observations within a radius of a lat/lon. Useful for "what birds are around here right now."

Parameters
- latitude (required): Latitude
- longitude (required): Longitude
- dist_km (optional): Radius in km (1-50, default 25)
- back (optional): Days back (1-30, default 14)
- max_results (optional): 1-10000 (default 100)
- species_code (optional): Optional species filter
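
A sketch of a call, assuming the `session` from the connection example:

```python
# Coordinates are illustrative; dist_km and back stay within the schema's bounds.
result = await session.call_tool(
    "nearby_observations",
    {"latitude": 37.77, "longitude": -122.42, "dist_km": 10, "back": 7},
)
```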
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description implies a read-only query by its wording ('recent observations'), and there are no annotations for it to contradict. Adding an explicit 'read-only' statement would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence plus a usage phrase, front-loading the key action and scope. Every word contributes value—no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (list recent observations within a radius) and full schema coverage, the description adequately covers the purpose. No output schema exists, so return format details are not required. Could briefly mention it returns a list of bird observations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are already documented. The description adds no new parameter-specific meaning beyond what the schema provides. It references the radius and lat/lon but in a general sense.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool returns recent observations within a radius of a lat/lon, naming a specific resource ('observations') and scope. It differentiates itself from siblings like 'recent_observations' by emphasizing geographic filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a clear usage context ('useful for what birds are around here right now'), implying when to use this tool. However, it does not explicitly state when not to use it or mention alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
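
For illustration, reusing the `session` from the connection sketch:

```python
# Message text is illustrative; "data_gap" is one of the documented type values.
result = await session.call_tool(
    "pipeworx_feedback",
    {"type": "data_gap", "message": "eBird pack lacks historical observation data."},
)
```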
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full behavioral disclosure. It discloses rate limits (5 per identifier per day), that it's free and doesn't count against quota, and how feedback is used (team reads daily, affects roadmap). It could mention that no immediate response is returned, but that's implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph but is densely informative without unnecessary words. Each sentence serves a purpose. It could be slightly more structured (e.g., bullet points) for quick scanning, but it is not overly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and the tool's simplicity, the description covers all necessary aspects: purpose, usage, parameter details, and behavioral quirks. The optional structured 'context' parameter is also addressed. It feels complete for a feedback submission tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds value by explaining the enum values for 'type' in detail, clarifying that 'context' is optional and what each field means, and giving guidance on the 'message' content. This enhances understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool's purpose: sending feedback to the Pipeworx team about bugs, missing features, or praise. It distinguishes itself from sibling tools by being the only feedback mechanism, and the description explicitly lists specific use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool (bug, feature/data_gap, praise) and what not to do (don't paste the end-user's prompt). It also mentions rate limits, cost, and quota impact, helping the agent decide appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters
- key (optional): Memory key to retrieve (omit to list all keys)
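
Both call shapes the schema allows, assuming the `session` from the connection example:

```python
# Omit "key" to list all saved keys; pass one to fetch a single value.
all_keys = await session.call_tool("recall", {})
one_value = await session.call_tool("recall", {"key": "target_ticker"})
```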
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It explains the core behaviors: retrieve by key or list all keys, and scoping. It does not detail edge cases like missing keys, but for a simple read operation this is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is only three sentences, front-loaded with the primary action, and every sentence provides necessary context without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one optional parameter, the description is fully adequate: it explains what it does, when to use it, scoping, and how it relates to sibling tools. No output schema is needed for this type of tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema covers the parameter well (100% coverage). The description adds value by providing concrete examples ('the user's target ticker, an address, prior research notes') and explaining scoping, which goes beyond the schema's bare description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: 'Retrieve a value previously saved via remember, or list all saved keys (omit the key argument).' It uses specific verb-resource pairs and distinguishes from siblings by referencing 'remember' and 'forget'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool ('Use to look up context the agent stored earlier... without re-deriving it from scratch') and provides alternatives by mentioning 'remember' and 'forget'. It also explains scoping ('Scoped to your identifier').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_changes (A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
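
An illustrative call, reusing the `session` from the connection sketch:

```python
# "30d" is the typical monitoring window the schema suggests.
result = await session.call_tool(
    "recent_changes",
    {"type": "company", "value": "AAPL", "since": "30d"},
)
```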
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description details fan-out behavior, return structure, and parameter formats. Since no annotations are provided, it carries the full burden, but it lacks explicit statements about read-only nature, side effects, or safety for repeated calls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with 6-7 sentences, front-loaded with purpose and examples, and ends with output details. Every sentence adds value, though there is slight redundancy in the examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains what the tool returns (structured changes + count + URIs). It covers all parameters and usage context, but could detail the output structure more.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds extra meaning: explains 'since' formats with examples ('7d','30d','3m','1y'), suggests typical values, and clarifies 'value' accepts ticker or CIK. This goes beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'What's new with a company' and lists multiple example queries. It distinguishes itself from sibling tools like 'recent_notable' by focusing on company changes across SEC, GDELT, and USPTO.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit user query examples and explains when to use it. However, it does not explicitly state when not to use it or compare to alternatives like 'entity_profile' or 'recent_observations'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_notable (A)

Only notable (rare / out-of-range / first-of-season) sightings in a region. Sometimes the most interesting subset.

Parameters
- region_code (required): eBird region code
- back (optional): Days back (1-30, default 14)
- detail (optional): simple | full (default simple)
- max_results (optional): 1-10000 (default 100)
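
A sketch of a call, assuming the `session` from the connection example:

```python
# "US-CA" follows the eBird region-code format; detail="full" per the schema.
result = await session.call_tool(
    "recent_notable",
    {"region_code": "US-CA", "back": 7, "detail": "full"},
)
```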
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states the selection criteria (notable) but lacks behavioral details like ordering, pagination, or what 'sometimes' means. Limited transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words, front-loaded with key purpose and criteria.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so return format is unspecified. Lacks usage context or prerequisites. Adequate but incomplete for a tool with no annotations and no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so all parameters have descriptions in the schema. The description adds no additional meaning beyond schema, resulting in baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns only notable sightings (rare, out-of-range, first-of-season) in a region, distinguishing it from siblings like recent_observations and nearby_observations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context (when you want notable sightings), but no explicit when-not or alternatives mentioned. Clear context but no exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_observations (B)

Recent bird sightings in a region. region_code is the eBird identifier — countries are 2-letter ("US", "GB"), states "US-CA", counties "US-CA-075", and birding hotspots use the "L" code from eBird (e.g., "L99381"). Optionally filter to one species.

Parameters
- region_code (required): eBird region code
- back (optional): Days back (1-30, default 14)
- max_results (optional): 1-10000 (default 100)
- species_code (optional): Optional eBird species code (use find_species to look up)
- include_provisional (optional): Include unconfirmed observations (default false)
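
An illustrative call, reusing the `session` from the connection sketch:

```python
# "L99381" is the hotspot code cited in the description.
result = await session.call_tool(
    "recent_observations",
    {"region_code": "L99381", "back": 7},
)
```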
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With empty annotations, the description must disclose behavioral traits. It explains the region_code format and the optional species filter, but does not mention rate limits, pagination behavior, data freshness, or what happens if no results are found. The description is insufficient for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, consisting of two sentences that get straight to the purpose. The first sentence states what the tool does, and the second adds important parameter context. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given there is no output schema and five parameters, the description could be more complete. It does not explain the return format (e.g., list of species counts or detailed observations) or how max_results and back parameters affect results. The description adequately covers the region_code parameter but leaves other behavioral details unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, but the tool description adds valuable context beyond the schema: it explains the region_code format with examples ('US', 'US-CA', 'US-CA-075', 'L99381') and mentions that species_code can be looked up via 'find_species'. This helps an agent use the parameters correctly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides 'recent bird sightings in a region' with a specific verb and resource. It explains the region_code format in detail. However, it does not explicitly differentiate itself from sibling tools like 'nearby_observations' or 'recent_notable', which could cause confusion about when to use this tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions optionally filtering to one species, which gives some usage context. However, it lacks explicit guidance on when to use this tool vs. alternatives, such as neighboring tools for different scopes (e.g., 'nearby_observations' vs. 'recent_observations'). No exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
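
A sketch of a call, assuming the `session` from the connection example:

```python
# Key name comes from the schema's examples; the value is illustrative.
result = await session.call_tool(
    "remember",
    {"key": "target_ticker", "value": "AAPL"},
)
```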
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully discloses behavior: key-value storage, scoping by identifier, persistence differences (24 hours for anonymous, permanent for authenticated), and pairing with recall/forget.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single well-structured paragraph with no fluff. Front-loaded with purpose, then usage, then details. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple store tool without output schema, the description is complete enough. It covers storage, persistence, and pairing. Could explicitly mention return behavior (e.g., success confirmation), but not critical for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with descriptions for both parameters. Description adds meaningful context beyond schema: explains key-value pair nature, gives naming examples, and clarifies value is any text. Adds value beyond baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Save data' and the resource 'data the agent will need to reuse later', with explicit scope. It distinguishes from siblings by mentioning recall and forget, and provides concrete examples (resolved ticker, target address).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use ('when you discover something worth carrying forward') and provides examples. Mentions alternatives (recall, forget) and context (authentication affects persistence).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
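
An illustrative call, reusing the `session` from the connection sketch:

```python
# "ozempic" is one of the schema's own drug-name examples.
result = await session.call_tool(
    "resolve_entity",
    {"type": "drug", "value": "ozempic"},
)
```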
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses returns IDs plus citation URIs. No annotations, so description carries burden; it covers core behavior but omits potential limitations or permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five concise sentences, no filler. Key info front-loaded: purpose, usage context, examples, and return type.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully describes tool for a simple lookup with high schema coverage. No output schema needed; return values mentioned. No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage; description adds examples and explains accepted formats for company and drug, enhancing understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies exact action: look up canonical identifiers for company or drug. Distinguishes from siblings like entity_profile by focusing on identifier resolution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'use this BEFORE calling other tools' and replaces multiple lookups. Lacks explicit when-not-to-use, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_claim (A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
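
For illustration, reusing the `session` from the connection sketch:

```python
# Claim text is the schema's own example.
result = await session.call_tool(
    "validate_claim",
    {"claim": "Apple's FY2024 revenue was $400 billion"},
)
```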
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description fully details behavior: v1 supports company-financial claims via SEC EDGAR/XBRL, returns verdict types, structured form, actual value with citation, and percent delta. It also notes it replaces multiple sequential calls.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is informative but slightly verbose; every sentence adds value. It is front-loaded with purpose and usage, then details returns and scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with explicit returns and domain scope, the description is fairly complete. No output schema is provided, but the return types are described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single required parameter 'claim' has 100% schema coverage with a description. The tool description adds significant context: natural-language format, examples, and that it focuses on company-financial claims.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fact-checks natural-language claims against authoritative sources, with specific examples and return values. It distinguishes itself from sibling tools which are unrelated to fact-checking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use the tool (to verify factual claims) and provides example query phrasings. However, it does not specify when not to use it or mention alternative tools, though the narrow domain implies scope.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
