Server Details

OpenAIRE MCP — EU research outputs (publications, datasets, software, projects)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-openaire
GitHub Stars: 0
Server Listing: mcp-openaire
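
The server speaks Streamable HTTP, so any MCP client can connect to it directly. Below is a minimal connection sketch using the official MCP Python SDK (the `mcp` package); the endpoint URL is a hypothetical placeholder, since the listing above does not display one. The `session` object it opens is reused by the per-tool sketches later on this page.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Hypothetical endpoint; substitute the server's actual URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # streamablehttp_client yields (read_stream, write_stream, get_session_id).
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```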

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 4/5 across 17 of 17 tools scored. Lowest: 2.5/5.

Each tool below is scored on six dimensions:
- Behavior: does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
- Conciseness: is the description appropriately sized, front-loaded, and free of redundancy?
- Completeness: given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
- Parameters: does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
- Purpose: does the description clearly state what the tool does and how it differs from similar tools?
- Usage Guidelines: does the description explain when to use this tool, when not to, or what alternatives exist?

Server Coherence: A

Disambiguation: 4/5
Most tools have clearly distinct purposes (e.g., compare_entities, entity_profile, search_*), but ask_pipeworx is a broad catch-all that could overlap with many of them, causing potential ambiguity.

Naming Consistency: 4/5
Names are mostly lowercase_snake_case with verbs (ask, compare, discover, get, search, validate) and some noun phrases (entity_profile, recent_changes). No camelCase, but there is minor inconsistency between verb-first and noun-first patterns.

Tool Count: 5/5
17 tools is well-scoped for a research data server covering companies, drugs, publications, projects, datasets, software, and memory; each tool earns its place.

Completeness: 4/5
Covers querying, searching, profiling, comparing, and validating claims across multiple domains. CRUD actions for research data are missing, but that is not the server's purpose; minor gaps such as direct patent search are covered by ask_pipeworx.

Available Tools

17 tools
ask_pipeworx: A

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
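
As a concrete illustration, a hedged sketch of calling this tool through the MCP Python SDK, assuming an initialized `session` from the connection sketch near the top of this page (the helper name and question are illustrative, not part of the server):

```python
from mcp import ClientSession

async def ask(session: ClientSession, question: str) -> str:
    """Route a natural-language question through ask_pipeworx."""
    result = await session.call_tool("ask_pipeworx", arguments={"question": question})
    # Results arrive as content blocks; keep the text ones.
    return "\n".join(b.text for b in result.content if b.type == "text")

# e.g. await ask(session, "What is the current US unemployment rate?")
```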

Behavior: 3/5
No annotations are provided, so the description carries the full burden. It describes the routing and result-return behavior but does not disclose whether the tool is read-only, has authentication needs, or any side effects. It is adequate but not fully transparent.

Conciseness: 5/5
Three sentences that pack essential information: purpose, when to use, and examples. No fluff, well front-loaded.

Completeness: 4/5
For a complex tool with a single parameter and no output schema, the description covers the core functionality and use cases. It doesn't describe the return format or error handling, but the examples and source list make it fairly complete.

Parameters: 4/5
Only one parameter, 'question', with a basic schema description. The tool description adds value by providing example questions that illustrate the scope, which goes beyond the schema's minimal description.

Purpose: 5/5
The description clearly states the tool answers natural-language questions by automatically selecting the right data source. It provides concrete examples and lists over 300 sources, distinguishing it from sibling tools like search_datasets or discover_tools.

Usage Guidelines: 4/5
Explicitly tells when to use the tool: for questions like 'What is X?' or 'Look up Y', where the agent doesn't want to figure out which Pipeworx tool to call. It does not explicitly state when not to use it, but the context is strong enough for an agent.

compare_entities: A

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema):
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
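
A sketch of a single batched comparison call, again assuming an open `session` from the connection sketch above (the tickers are illustrative):

```python
from mcp import ClientSession

async def compare_big_tech(session: ClientSession) -> None:
    """One call instead of 8-15: side-by-side fundamentals for three tickers."""
    result = await session.call_tool(
        "compare_entities",
        arguments={"type": "company", "values": ["AAPL", "MSFT", "GOOGL"]},
    )
    for block in result.content:
        if block.type == "text":
            print(block.text)  # paired data plus pipeworx:// citation URIs
```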

Behavior: 4/5
With no annotations provided, the description explains that for companies it pulls from SEC EDGAR/XBRL, and for drugs it pulls from FAERS, FDA approvals, and trial counts. It mentions return format (paired data + URIs). While it implies non-destructive reads, it could be more explicit about being read-only.

Conciseness: 5/5
The description is concise at around 100 words, front-loaded with the main purpose, and contains no superfluous information. Every sentence adds value.

Completeness: 5/5
Given no output schema, the description explains return values (paired data + URIs) and data sources comprehensively. It covers all necessary context for an agent to decide and use the tool effectively.

Parameters: 5/5
Schema coverage is 100% with descriptions for both parameters, but the description adds significant value by explaining what data is pulled for each type and giving examples (e.g., tickers like AAPL, drug names like ozempic). This goes well beyond the schema.

Purpose: 5/5
The description clearly states the tool compares 2-5 companies or drugs side by side, pulling specific financial or adverse event data. It distinguishes itself from siblings by noting it replaces 8-15 sequential agent calls, making it a batch comparison tool.

Usage Guidelines: 4/5
Explicitly tells when to use the tool with example user phrases like "compare X and Y" and "X vs Y". It differentiates between company and drug types with distinct data sources. However, it does not explicitly mention when not to use it or provide alternatives.

discover_tools: A

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema):
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
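
The "call this FIRST" guidance suggests a browse-then-call pattern; a minimal sketch, assuming an open `session` (the query and limit are illustrative):

```python
from mcp import ClientSession

async def browse_options(session: ClientSession, task: str, limit: int = 10) -> None:
    """List the top-N candidate tools for a task before committing to one."""
    result = await session.call_tool(
        "discover_tools",
        arguments={"query": task, "limit": limit},
    )
    for block in result.content:
        if block.type == "text":
            print(block.text)  # tool names + descriptions, ranked by relevance

# e.g. await browse_options(session, "analyze housing market trends")
```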

Behavior: 4/5
With no annotations, the description provides transparency about returning top-N relevant tools with names and descriptions. It omits potential edge cases like rate limits or ambiguous queries, but for a simple discovery tool this is adequate.

Conciseness: 4/5
The description is concise yet informative, starting with the action, listing examples, stating return format, and giving usage guidance. Every sentence adds value; no unnecessary fluff.

Completeness: 5/5
Given the tool's simplicity and the absence of an output schema, the description fully explains what the tool does, when to use it, and what it returns. No gaps detected for effective agent invocation.

Parameters: 3/5
Both parameters are fully described in the schema (100% coverage). The description adds context that limit controls the number returned, but no significant new detail beyond the schema. Baseline of 3 is appropriate.

Purpose: 5/5
The description clearly states the tool finds tools by describing a task or data area, listing specific domains like SEC filings, FDA drugs, etc. It distinguishes itself from sibling tools that search for data, making its purpose unambiguous.

Usage Guidelines: 4/5
Explicit guidance to use for browsing/searching tools and to call FIRST when many tools are available. While it doesn't list when-not-to-use scenarios, the positive context is strong and effectively guides the agent.

entity_profile: A

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema):
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
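
Since names are not accepted, the documented pattern is a two-step chain through resolve_entity; a sketch under that assumption, again reusing an open `session` (these tools expose no output schema, so the resolved ticker below is a labeled placeholder):

```python
from mcp import ClientSession

async def profile_by_name(session: ClientSession, name: str):
    """Resolve a company name to an official identifier, then fetch the profile."""
    resolved = await session.call_tool(
        "resolve_entity", arguments={"type": "company", "value": name}
    )
    # Extracting the ticker/CIK from `resolved` depends on the (undocumented)
    # result shape; "AAPL" stands in here as a placeholder.
    ticker = "AAPL"
    return await session.call_tool(
        "entity_profile", arguments={"type": "company", "value": ticker}
    )
```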

Behavior: 4/5
The description discloses the returned data types (SEC filings, fundamentals, patents, news, LEI) and mentions citation URIs. The verb 'Get' implies read-only, but without annotations, the safety profile is not explicitly stated. It lacks details on rate limits or authentication but is otherwise transparent.

Conciseness: 4/5
The description is front-loaded with the core purpose, followed by use cases, then return content, then parameter guidance. It is slightly verbose but every sentence serves a purpose. Minor room for trimming.

Completeness: 5/5
Given no output schema, the description comprehensively lists all returned content categories, covering the complexity of the tool (multiple data sources). It also explains the input format with examples, making the tool fully understandable.

Parameters: 5/5
Schema coverage is 100%, and the description adds meaningful context beyond the schema: examples of ticker and CIK values, and a crucial note that names are not supported, directing to resolve_entity. This prevents common agent errors.

Purpose: 5/5
The description uses a specific verb ('Get everything about a company in one call') and clearly distinguishes the tool's purpose from siblings like resolve_entity and search tools. The listed use cases (e.g., 'tell me about X', 'research Microsoft') directly map to common user intents.

Usage Guidelines: 5/5
The description explicitly states when to use ('when a user asks...') and when not to use ('Names not supported — use resolve_entity first'). It provides clear alternatives and context, leaving no ambiguity.

forget: A

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema):
- key (required): Memory key to delete

Behavior: 3/5
The description indicates destructive behavior (delete), but with no annotations, it could disclose more about side effects, error handling (e.g., missing key), or irreversibility.

Conciseness: 5/5
Two short sentences answer what, when, and how to pair with other tools. No wasted words.

Completeness: 5/5
For a simple tool with one parameter and no output schema, the description covers purpose, usage, and relations adequately.

Parameters: 3/5
Schema coverage is 100% and the schema description for 'key' is clear. The tool description does not add extra parameter meaning, but the schema is sufficient, so the baseline of 3 is appropriate.

Purpose: 5/5
The description uses a specific verb ('Delete') and resource ('memory by key'), making the action clear. It distinguishes from siblings like 'remember' (store) and 'recall' (retrieve) by implication.

Usage Guidelines: 5/5
Explicitly states when to use the tool: when context is stale, the task is done, or to clear sensitive data. Also recommends pairing with 'remember' and 'recall', providing clear context.

get_project: B

Fetch a project by OpenAIRE id.

Parameters (JSON Schema):
- id (required): OpenAIRE id

Behavior: 2/5
No annotations are provided. The description gives only basic functionality and omits behavioral traits such as authentication requirements, rate limits, or whether the fetch is by exact ID or partial match. It carries the full burden but is insufficient.

Conciseness: 4/5
The description is a single short sentence, efficient with no superfluous words. However, it could benefit from a bit more context without becoming verbose.

Completeness: 2/5
Given the low complexity (1 parameter) and no output schema, the description is minimally adequate but does not explain the return format or behavior (e.g., error handling, field types). It is incomplete for a tool with no output schema.

Parameters: 3/5
Schema coverage is 100% with a single parameter 'id' described as 'OpenAIRE id'. The description adds no meaning beyond the schema, so the baseline of 3 is appropriate.

Purpose: 5/5
The description clearly states the verb 'fetch', the resource 'project', and the identifier 'OpenAIRE id'. It is distinguished from sibling tools like 'search_projects', which is for searching.

Usage Guidelines: 2/5
No guidance on when to use this tool versus alternatives like 'search_projects' or 'resolve_entity'. No context on prerequisites or when not to use it.

get_research_product: A

Fetch a single research product (publication / dataset / software) by OpenAIRE id.

Parameters (JSON Schema):
- id (required): OpenAIRE id (e.g. "doi_dedup___::abc...")
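
A direct-lookup sketch, assuming an open `session`; the id below is a made-up placeholder following the "doi_dedup___::..." pattern from the parameter description:

```python
from mcp import ClientSession

# Hypothetical id, shaped like the "doi_dedup___::abc..." example in the schema.
EXAMPLE_ID = "doi_dedup___::abc123"

async def fetch_product(session: ClientSession, openaire_id: str = EXAMPLE_ID):
    """Fetch one publication/dataset/software record by OpenAIRE id."""
    return await session.call_tool(
        "get_research_product", arguments={"id": openaire_id}
    )
```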

Behavior: 3/5
Description is brief and does not disclose error handling, response format, or any side effects. With no annotations, the description carries the full burden but provides only basic action info.

Conciseness: 5/5
Single sentence, front-loaded with key information, no unnecessary words. Highly concise.

Completeness: 4/5
For a simple fetch tool with one parameter and no output schema, the description covers the essential purpose and identifier. Minor gaps in behavior on missing ID or response shape, but adequate for a straightforward lookup.

Parameters: 3/5
Schema coverage is 100% with a parameter description already present. The tool description repeats the identification method but does not add new semantic details beyond the schema.

Purpose: 5/5
The description clearly states the verb 'Fetch', resource 'single research product', specifies types (publication/dataset/software), and identification method (OpenAIRE id). It distinguishes from sibling tools like 'get_project' and search-based tools.

Usage Guidelines: 3/5
The description implies usage for fetching a known product by ID, but does not explicitly state when to use this tool versus alternatives like 'search_publications' or when not to use it.

pipeworx_feedback: A

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema):
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.

Behavior: 5/5
Even without annotations, the description fully discloses behavioral traits: it mentions rate limiting (5 per identifier per day), that it's free and doesn't count against the tool-call quota, and that feedback is read daily and affects the roadmap. No contradictions.

Conciseness: 5/5
The description is concise (about 6 sentences) and well-structured. It front-loads the purpose, then gives usage guidance, formatting instructions, and behavioral notes. Every sentence adds value, and there is no redundancy.

Completeness: 5/5
Given the tool's 3 parameters, no output schema, and nested objects, the description is complete. It explains each parameter's purpose, provides examples of valid types, clarifies optional context, and sets expectations about rate limits and impact.

Parameters: 4/5
All parameters have schema descriptions (100% coverage), so the description adds marginal value. However, it provides additional context on usage (e.g., specifying that 'message' should be 1-2 sentences and not include the user prompt). This is helpful but not essential given schema coverage.

Purpose: 5/5
The description clearly defines the tool's purpose: to provide feedback (bug, feature, data_gap, praise) to the Pipeworx team. It distinguishes itself from sibling tools by being the sole feedback channel, and the description explicitly lists scenarios for use.

Usage Guidelines: 5/5
The description provides explicit guidance on when to use the tool, including specific types (bug, feature, data_gap, praise) and how to describe issues. It also states what not to do (don't paste the end-user prompt) and notes rate limits and quota details.

recall: A

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 4/5
With no annotations, the description covers key behavioral traits: it is scoped to an identifier (anonymous IP, key hash, account ID) and has two modes (retrieve a single key or list all keys). It implies a read-only operation without side effects. It omits details such as the return format for a missing key, but is overall adequate.

Conciseness: 5/5
Three sentences, each adding essential information: the first defines the primary action and modes, the second provides usage examples and rationale, the third adds scope and lifecycle pairing. No fluff, front-loaded with the key action.

Completeness: 4/5
Given the tool's simplicity (one optional param, no output schema), the description covers purpose, usage, scope, and sibling relationships. It could add a note on what happens when the key doesn't exist, but the current level is sufficient for effective use.

Parameters: 4/5
Schema coverage is 100% with one optional string key. The description adds value by explaining the dual behavior when the key is omitted (list all keys) and connecting it to sibling tools ('remember' to save, 'forget' to delete), which goes beyond the schema's simple description.

Purpose: 5/5
The description clearly states the tool retrieves a value saved via 'remember' or lists all keys when the key argument is omitted. It uses specific verbs ('Retrieve', 'list') and names related siblings ('remember', 'forget'), making the purpose unambiguous.

Usage Guidelines: 4/5
The description gives concrete examples of when to use the tool (recall a target ticker, address, prior research notes) and explains the rationale (avoid re-deriving context). It also pairs with 'remember' and 'forget' for lifecycle management, though it doesn't explicitly state when not to use it.

recent_changes: A

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema):
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
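
A monitoring sketch using the relative-shorthand form of `since`, assuming an open `session` (the helper name and arguments are illustrative):

```python
from mcp import ClientSession

async def whats_new(session: ClientSession, ticker: str, window: str = "30d"):
    """Fan out to SEC filings, GDELT news, and USPTO patents since the window start."""
    return await session.call_tool(
        "recent_changes",
        arguments={"type": "company", "value": ticker, "since": window},
    )

# e.g. await whats_new(session, "AAPL", "3m")
```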

Behavior: 3/5
With no annotations, the description carries the full burden. It discloses the fan-out to three sources (SEC, GDELT, USPTO), accepted date formats, and the return components (changes, count, URIs). However, it lacks details on error cases, rate limits, or the exact structure of 'structured changes'.

Conciseness: 5/5
The description is a single paragraph that front-loads the purpose, provides concrete usage examples, lists sources, explains date formats, and summarizes the return format. Every sentence adds value with no redundancy.

Completeness: 3/5
The tool fans out to three sources and returns structured data, but the description does not detail the output schema (e.g., fields in each change, aggregation logic). For a complex tool, more completeness in describing the return structure would be beneficial.

Parameters: 4/5
Schema coverage is 100%, and the description adds value by explaining the 'since' parameter format (ISO or relative shorthand) with examples, that the 'value' parameter accepts a ticker or CIK, and that 'type' is restricted to company. This goes beyond the schema descriptions.

Purpose: 5/5
The description starts with 'What's new with a company' and provides example queries like 'what's happening with X?', clearly stating the verb-resource pair. It distinguishes from sibling tools such as 'entity_profile' or 'compare_entities' by focusing on temporal changes.

Usage Guidelines: 4/5
The description gives explicit usage scenarios with example queries (e.g., 'what's happening with X?', 'brief me on what happened with Microsoft this quarter') and suggests typical monitoring values like '30d'. It does not explicitly state when not to use it, but the examples clearly imply the use case.

remember: A

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
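
The remember/recall/forget trio forms a small key-value lifecycle; a round-trip sketch, assuming an open `session` (the key and value are illustrative):

```python
from mcp import ClientSession

async def memory_roundtrip(session: ClientSession):
    """Save, read back, and delete one memory scoped to this identifier."""
    await session.call_tool(
        "remember", arguments={"key": "target_ticker", "value": "AAPL"}
    )
    stored = await session.call_tool("recall", arguments={"key": "target_ticker"})
    await session.call_tool("forget", arguments={"key": "target_ticker"})
    return stored
```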

Behavior: 5/5
With no annotations provided, the description discloses persistence details (24 hours for anonymous sessions, persistent for authenticated users), key-value storage, and identifier scoping, with no contradictions.

Conciseness: 5/5
A concise four-sentence description, front-loaded with purpose and free of redundant information.

Completeness: 5/5
Adequately explains the tool's behavior, storage duration, and pairing with recall/forget, despite the absence of an output schema.

Parameters: 4/5
Schema coverage is 100%, so the baseline is 3. The description adds value with examples of typical keys and values, slightly enhancing understanding beyond the schema.

Purpose: 5/5
The description clearly states 'Save data the agent will need to reuse later' and specifies cross-session/conversation usage, distinguishing it from siblings like recall and forget.

Usage Guidelines: 5/5
Explicitly says when to use it (discovering something worth carrying forward) and references the sibling tools recall and forget for complementary actions.

resolve_entity: A

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema):
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").

Behavior: 3/5
No annotations are provided, so the description carries the full burden. It describes the return format (IDs + pipeworx:// URIs) but does not explicitly state that the tool is read-only or disclose potential side effects, rate limits, or authentication requirements. The mutation implications are minimal (lookup), but more explicit safety assurances would improve the score.

Conciseness: 5/5
The description is concise (two short paragraphs) with no superfluous information. It front-loads the primary function, follows with examples and usage guidance, and ends with a benefit summary. Every sentence earns its place.

Completeness: 4/5
For a simple lookup tool with only two parameters and no output schema, the description covers input usage, output format, and invocation context. It does not address error handling or multiple matches, but given the tool's complexity, this is a minor gap. An ideal description might also mention that it standardizes ambiguous names.

Parameters: 5/5
Both parameters are fully described in the schema (100% coverage). The description adds significant value by providing concrete examples for parameter values (e.g., 'AAPL' for a company ticker, '0000320193' for a CIK) and mapping them to output IDs (ticker, RxCUI). This contextual enrichment goes well beyond the schema's enum and type constraints.

Purpose: 5/5
The description clearly states the tool resolves names to canonical identifiers (CIK, ticker, RxCUI, LEI) for companies or drugs, with specific examples. It distinguishes itself as a prerequisite for other tools needing official IDs, setting it apart from siblings like entity_profile or search tools.

Usage Guidelines: 5/5
Explicitly states when to use ('when a user mentions a name and you need the ... ID systems that other tools require') and provides a clear directive ('Use this BEFORE calling other tools that need official identifiers'). Examples illustrate typical scenarios, leaving no ambiguity.

search_datasets: C

Search research datasets.

Parameters (JSON Schema):
- page (optional): 1-based page (default 1)
- size (optional): Page size, 1-100 (default 20)
- query (required): Free-text query (matches title/abstract/keywords)
- to_year (optional)
- from_year (optional)

Behavior: 2/5
No annotations are provided, and the description only says 'search' with no details on side effects, authentication, rate limits, or return format. The agent cannot infer behavioral traits beyond the basic operation.

Conciseness: 2/5
Extremely short (3 words), but under-specified. While concise, it omits necessary detail for a tool with 5 parameters and no output schema. The brevity harms completeness.

Completeness: 2/5
There is no output schema, so the description should explain the return format or pagination behavior; it does not. It also lacks context on search scope, indexing, and results structure. Incomplete for informed agent invocation.

Parameters: 2/5
Schema coverage is 60% (3 of 5 params described). The description adds no parameter meaning beyond the schema. Two parameters (from_year, to_year) lack schema descriptions and are not elaborated in the tool description, leaving a gap.

Purpose: 4/5
The description clearly states the tool searches research datasets, which is a specific verb+resource. However, it does not differentiate from sibling tools like search_projects or search_publications, leaving ambiguity about what 'datasets' specifically entails.

Usage Guidelines: 2/5
No guidance on when to use this tool versus alternatives. The description lacks context on prerequisites, limitations, or when to prefer other search tools (e.g., search_projects for projects, search_publications for papers).

search_projects: A

Search funded research projects (CORDIS for EC; also NSF, NIH, etc.).

Parameters (JSON Schema):
- page (optional): 1-based page (default 1)
- size (optional): Page size, 1-100 (default 20)
- query (required): Free-text query (matches title/abstract/keywords)
- funder (optional): Funder short name
- country (optional): Coordinator country (ISO alpha-2)
- from_year (optional): Earliest project start year

Behavior: 2/5
No annotations are provided, and the description only mentions the data sources, failing to disclose behavioral traits such as being read-only, pagination behavior, or any potential side effects.

Conciseness: 4/5
The description is a single concise sentence that starts with the verb, efficiently conveying the primary purpose without excess words.

Completeness: 3/5
While the schema covers all parameters, the description lacks details on result format or expected behavior, leaving the agent with incomplete context for invocation.

Parameters: 4/5
With 100% schema coverage, the schema already describes parameters. The description adds value by indicating possible funder values (EC, NSF, NIH), which supplements the schema's description.

Purpose: 5/5
The description clearly states the tool searches for funded research projects, citing specific databases (CORDIS, NSF, NIH), which distinguishes it from siblings like search_publications and search_datasets.

Usage Guidelines: 3/5
The description implies it covers multiple funders but does not explicitly specify when to use this tool over alternatives like search_datasets or search_publications, leaving ambiguity.

search_publications: C

Search scholarly publications (articles, preprints, theses, books).

Parameters (JSON Schema):
- page (optional): 1-based page (default 1)
- size (optional): Page size, 1-100 (default 20)
- query (required): Free-text query (matches title/abstract/keywords)
- funder (optional): Funder short name — e.g. "EC", "NIH", "NSF", "WT"
- country (optional): Country code — ISO 3166-1 alpha-2
- to_year (optional): Latest publication year (YYYY)
- from_year (optional): Earliest publication year (YYYY)
- open_access (optional): Restrict to open-access publications
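
A filtered-search sketch combining several of the optional parameters, assuming an open `session`; the query and filter values are illustrative, and the argument types are inferred from the parameter list above:

```python
from mcp import ClientSession

async def ec_open_access(session: ClientSession):
    """EC-funded open-access publications on one topic, 2020-2024."""
    return await session.call_tool(
        "search_publications",
        arguments={
            "query": "machine learning climate",
            "funder": "EC",
            "open_access": True,
            "from_year": 2020,
            "to_year": 2024,
            "size": 20,
        },
    )
```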

Behavior: 2/5
With no annotations provided, the description carries the full burden. It only implies a read operation but fails to disclose any behavioral details such as result format, sorting, rate limits, or authentication needs. The minimal description does not compensate for missing annotations.

Conciseness: 4/5
The description is a single concise sentence (9 words) with no fluff. It is front-loaded with the key action. However, for a search tool with eight parameters and no output schema, slightly more context would be beneficial.

Completeness: 2/5
Given the complexity (8 parameters, no annotations, no output schema), the description is insufficient. It does not explain return values (expected results), pagination, or any constraints beyond what the schema already provides. A more complete description would mention the output type and provide examples.

Parameters: 3/5
Schema description coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema; it does not explain parameters further or provide usage examples.

Purpose: 4/5
The description clearly states the verb 'Search' and specifies the resource as 'scholarly publications' with listed types (articles, preprints, theses, books). This distinguishes it from sibling tools like search_datasets or search_software by resource type. However, no explicit differentiation is made.

Usage Guidelines: 2/5
No guidance is provided on when to use this tool versus alternatives, nor any prerequisites or contextual cues. The description lacks instructions on usage scenarios.

search_software: C

Search research software registrations.

Parameters (JSON Schema):
- page (optional): 1-based page (default 1)
- size (optional): Page size, 1-100 (default 20)
- query (required): Free-text query (matches title/abstract/keywords)

Behavior: 2/5
With no annotations, the description bears full responsibility for behavioral disclosure, but it only repeats the tool's purpose. It does not mention pagination, result limits, or any side effects, leaving behavior opaque.

Conciseness: 3/5
The description is a single concise sentence, which is efficient but lacks substance. It does not add value beyond the tool name and could be more informative without compromising brevity.

Completeness: 2/5
Given the context of multiple sibling search tools, the description is incomplete. It does not explain result formats, filtering, or how this search differs from others, leaving agents with insufficient information for correct invocation.

Parameters: 3/5
The input schema has 100% description coverage for its three parameters, so the schema already provides meaning. The description adds no additional parameter information beyond the resource name, earning a baseline of 3.

Purpose: 4/5
The description 'Search research software registrations' clearly indicates the verb (search) and resource (research software registrations). It distinguishes from sibling tools like search_datasets or search_projects by specifying the resource type, but does not explicitly differentiate itself.

Usage Guidelines: 2/5
No guidance is provided on when to use this tool versus alternatives like search_datasets or search_projects. The description lacks any context about appropriate use cases or exclusions.

validate_claim: A

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema):
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
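
A fact-check sketch reusing the example claim from the parameter description, assuming an open `session`:

```python
from mcp import ClientSession

async def check_claim(session: ClientSession) -> None:
    """Verify a company-financial claim against SEC EDGAR + XBRL."""
    result = await session.call_tool(
        "validate_claim",
        arguments={"claim": "Apple's FY2024 revenue was $400 billion"},
    )
    # Expected verdicts: confirmed / approximately_correct / refuted /
    # inconclusive / unsupported (per the tool description).
    for block in result.content:
        if block.type == "text":
            print(block.text)
```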

Behavior: 4/5
Despite no annotations, the description discloses key behavioral traits: it uses SEC EDGAR + XBRL for company-financial claims, returns a verdict with a structured form and the actual value with a citation, and replaces 4-6 sequential calls. It does not cover auth or rate limits, but for a read-only factual lookup tool, this is sufficient.

Conciseness: 5/5
The description is very concise: two sentences covering purpose, usage, scope, and output. No wasted words, front-loaded with the key action. It earns its place.

Completeness: 5/5
Given the tool's simplicity (one parameter, no output schema), the description is complete. It explains the domain, input format, output structure, and provides usage examples. No gaps remain.

Parameters: 3/5
Schema coverage is 100% for the single parameter 'claim'. The description adds value by providing example claims and clarifying that input is natural language. However, it does not add semantics beyond what the schema already provides; the baseline of 3 is appropriate.

Purpose: 5/5
The description clearly states the tool's purpose: to fact-check, verify, validate, or confirm/refute a natural-language factual claim against authoritative sources. It specifies the domain (company-financial claims) and distinguishes it from siblings by being a specialized tool; no other sibling tool overlaps in function.

Usage Guidelines: 5/5
The description explicitly says when to use the tool: when an agent needs to check whether a user's statement is true. It gives example queries and indicates the scope (v1 supports company-financial claims). No alternative tool is mentioned, but siblings are unrelated.
