
Server Details

DataCite MCP — DOIs for research datasets (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-datacite
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (grade: A)

Average 4.4/5 across 14 of 14 tools scored. Lowest: 3.9/5.

Server Coherence (grade: A)
Disambiguation: 4/5

Most tools have distinct purposes (DOI lookup, memory, Q&A, comparison, resolution). However, 'ask_pipeworx' (general Q&A) and 'validate_claim' (fact-checking) could cause confusion as both involve answering questions from data. Others are well-separated.

Naming Consistency: 3/5

Tool names use snake_case but follow different patterns: some are verb_noun (e.g., 'get_doi', 'search_dois'), some are just verbs ('forget', 'recall'), and some embed 'pipeworx' ('ask_pipeworx', 'pipeworx_feedback'). This inconsistency lowers clarity.

Tool Count: 4/5

14 tools is reasonable for the combined scope of DataCite DOI operations, Pipeworx data queries, and memory utilities. No tool feels superfluous, though the memory trio (remember/recall/forget) adds slight complexity.

Completeness: 4/5

The tool surface covers DOI search/retrieval, entity resolution, company profiles, comparisons, updates, claim validation, and tool discovery. Minor gaps exist (e.g., no DOI creation), but the core information retrieval and analysis workflows are well-supported.

Available Tools (14 tools)

ask_pipeworx (grade: A)

Answer a natural-language question by automatically picking the right data source. Use when a user asks "What is X?", "Look up Y", "Find Z", "Get the latest…", "How much…", and you don't want to figure out which Pipeworx pack/tool to call. Routes across SEC EDGAR, FRED, BLS, FDA, Census, ATTOM, USPTO, weather, news, crypto, stocks, and 300+ other sources. Pipeworx picks the right tool, fills arguments, returns the result. Examples: "What is the US trade deficit with China?", "Adverse events for ozempic", "Apple's latest 10-K", "Current unemployment rate".

Parameters (JSON Schema)
- question (required): Your question or request in natural language
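
For illustration, a minimal Python sketch of what an MCP tools/call request for this tool might look like; the question text is only an example and the request id is arbitrary:

    # Sketch of an MCP "tools/call" JSON-RPC request for ask_pipeworx (illustrative values).
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "ask_pipeworx",
            "arguments": {"question": "What is the US trade deficit with China?"},
        },
    }
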
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses that Pipeworx picks the right tool, fills arguments, and returns the result. It mentions the breadth of sources. It does not cover potential limitations or error handling, but the core behavior is transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured. It starts with the core purpose, explains when to use, lists examples, and mentions the range of sources. Every sentence is informative with no extraneous content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only one parameter and no output schema, the description is largely complete. It explains functionality, use cases, and gives examples. It could optionally mention response format or limits, but it is sufficient for an agent to understand and use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage with one parameter 'question' described as 'Your question or request in natural language'. The description adds value by providing concrete examples (e.g., 'What is the US trade deficit with China?') and clarifying the scope, which goes beyond the schema's minimal description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that it answers natural-language questions by automatically selecting the appropriate data source from over 300 sources. It provides specific examples and distinguishes itself from sibling tools which are more specific utilities like compare_entities or get_doi.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use when a user asks... and you don't want to figure out which Pipeworx pack/tool to call.' This gives clear usage context. However, it does not explicitly state when not to use it, such as when a direct tool would be more appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_entities (grade: A)

Compare 2–5 companies (or drugs) side by side in one call. Use when a user says "compare X and Y", "X vs Y", "how do X, Y, Z stack up", "which is bigger", or wants tables/rankings of revenue / net income / cash / debt across companies — or adverse events / approvals / trials across drugs. type="company": pulls revenue, net income, cash, long-term debt from SEC EDGAR/XBRL for tickers like AAPL, MSFT, GOOGL. type="drug": pulls adverse-event report counts (FAERS), FDA approval counts, active trial counts. Returns paired data + pipeworx:// citation URIs. Replaces 8–15 sequential agent calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- values (required): For company: 2–5 tickers/CIKs (e.g., ["AAPL","MSFT"]). For drug: 2–5 names (e.g., ["ozempic","mounjaro"]).
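
A hypothetical arguments payload for each supported type, following the parameter notes above (tickers and drug names are taken from the examples):

    # Illustrative compare_entities arguments, one per entity type.
    company_args = {"type": "company", "values": ["AAPL", "MSFT", "GOOGL"]}  # 2-5 tickers/CIKs
    drug_args = {"type": "drug", "values": ["ozempic", "mounjaro"]}          # 2-5 drug names
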
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses data sources (SEC EDGAR/XBRL for companies, FAERS for drugs), return data types (paired data + citation URIs), and implies read-only behavior. It could explicitly state it is a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two paragraphs, concise and front-loaded with the purpose and usage triggers. Every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description covers purpose, usage triggers, parameter details, data sources, and return format. It is complete for an agent to correctly select and invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, but the description adds significant context: explains what each enum value (company/drug) fetches and gives examples for the values parameter (tickers and drug names), going beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it compares 2-5 companies or drugs side by side, specifying the verb 'compare' and the resource 'entities'. It distinguishes between the two types and provides concrete examples of use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage triggers like 'compare X and Y' and 'X vs Y', and notes that it replaces 8-15 sequential agent calls. However, it does not explicitly list when not to use it or name sibling tools as alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (grade: A)

Find tools by describing the data or task. Use when you need to browse, search, look up, or discover what tools exist for: SEC filings, financials, revenue, profit, FDA drugs, adverse events, FRED economic data, Census demographics, BLS jobs/unemployment/inflation, ATTOM real estate, ClinicalTrials, USPTO patents, weather, news, crypto, stocks. Returns the top-N most relevant tools with names + descriptions. Call this FIRST when you have many tools available and want to see the option set (not just one answer).

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
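
A sketch of plausible arguments, reusing one of the schema's own example queries and the documented default limit:

    # Illustrative discover_tools arguments.
    arguments = {
        "query": "analyze housing market trends",  # required: natural-language task description
        "limit": 20,                               # optional: default 20, max 50
    }
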
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool returns the top-N most relevant tools with names and descriptions, but it is silent on read-only status, authorization needs, and error handling. For a search tool this is adequate, though it could explicitly state non-destructive behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences front-loaded with purpose. The list of example domains is useful but slightly verbose. Overall efficient and to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose, input, output (returns top-N tools with names+descriptions). Lacks details like default limit, ordering, or behavior on empty results. Given low complexity, mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters described. The description adds no additional meaning beyond the schema, which already characterizes 'query' as a natural-language description and states the default/max limit. The baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it finds tools by describing data or task, with specific verb 'Find tools' and resource 'tools'. Distinguishes from siblings by focusing on discovery (returns tool names+descriptions) rather than direct actions like ask, compare, or validate.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to call it FIRST when browsing/searching, and when wanting to see the option set. Implies not for single-answer tasks. Provides clear context but stops short of explicitly naming alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

entity_profile (grade: A)

Get everything about a company in one call. Use when a user asks "tell me about X", "give me a profile of Acme", "what do you know about Apple", "research Microsoft", "brief me on Tesla", or you'd otherwise need to call 10+ pack tools across SEC EDGAR, SEC XBRL, USPTO, news, and GLEIF. Returns recent SEC filings, latest revenue/net income/cash position fundamentals, USPTO patents matched by assignee, recent news mentions, and the LEI (legal entity identifier) — all with pipeworx:// citation URIs. Pass a ticker like "AAPL" or zero-padded CIK like "0000320193".

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today; person/place coming soon.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193"). Names not supported — use resolve_entity first if you only have a name.
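
An illustrative call; only type "company" is supported, and the value must already be a ticker or zero-padded CIK (use resolve_entity first if you only have a name):

    # Illustrative entity_profile arguments.
    by_ticker = {"type": "company", "value": "AAPL"}     # ticker form
    by_cik = {"type": "company", "value": "0000320193"}  # zero-padded CIK form
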
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully discloses the tool's behavior: it returns recent SEC filings, fundamentals, patents, news, and LEI, with citation URIs. There is no contradiction and the description is transparent about what the tool does.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: a clear opening statement, followed by usage examples, then a bullet-like list of returned data. Every sentence earns its place without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema and no annotations, the description covers what the tool does, when to use it, parameter details, and returned data. It is complete for an aggregation tool and references related tools where appropriate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds significant value by clarifying that type only supports 'company' and value should be a ticker or CIK, explicitly noting that names are not supported and directing to resolve_entity. This goes beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Get everything about a company in one call,' using a specific verb and resource. It distinguishes this tool from siblings by noting it replaces 10+ other pack tools, making the purpose very clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance with concrete user queries ('tell me about X', 'brief me on Tesla') and states when not to use ('Names not supported — use resolve_entity first'). This is excellent usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (grade: A)

Delete a previously stored memory by key. Use when context is stale, the task is done, or you want to clear sensitive data the agent saved earlier. Pair with remember and recall.

Parameters (JSON Schema)
- key (required): Memory key to delete
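
A minimal illustrative payload; the key name is borrowed from the remember schema's examples and would be one saved earlier:

    # Illustrative forget arguments (key name is an example, not a real stored key).
    arguments = {"key": "target_ticker"}
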
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden. It correctly indicates a destructive operation ('Delete') but lacks details on what happens if the key does not exist or if the action is irreversible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the primary action, and every word is purposeful. No wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description fully covers purpose, usage, and relationships to sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%. The description's mention of 'memory key' adds minimal value beyond the schema description 'Memory key to delete'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and the resource ('a previously stored memory by key'), distinguishing it from sibling tools like 'remember' and 'recall'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use ('when context is stale, the task is done, or you want to clear sensitive data') and mentions sibling tools ('Pair with remember and recall').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_doi (grade: A)

Fetch a single DOI record. Returns full metadata: title(s), creators with affiliations and ORCIDs, abstract, publisher, year, dates, related identifiers, subjects, license, sizes/formats.

Parameters (JSON Schema)
- doi (required): DOI (e.g., "10.5281/zenodo.1234567")
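
An illustrative call using the example DOI from the schema:

    # Illustrative get_doi arguments.
    arguments = {"doi": "10.5281/zenodo.1234567"}  # example DOI taken from the schema
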
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries full burden. It discloses the return content comprehensively, though it does not mention error handling or limitations for invalid DOIs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence is concise and front-loaded with the action, followed by a list of returned metadata. Every part adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple fetch tool with one parameter, the description covers the return fields completely. No output schema exists, so the description fulfills that role adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a descriptive parameter for 'doi'. The description adds value with an example format ('10.5281/zenodo.1234567'), going slightly beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Fetch a single DOI record' and enumerates all returned metadata fields, distinguishing it from sibling tools like 'search_dois'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving full metadata of a single DOI but does not explicitly state when to use this tool versus alternatives like search_dois for multiple queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_repositories (grade: A)

List DataCite-registered data repositories (e.g., Zenodo, Dryad, Figshare). Filter by query. Useful for "where do I deposit a dataset" lookups.

Parameters (JSON Schema)
- page (optional): 1-based page
- query (optional): Repository name / description filter
- page_size (optional): 1-1000 (default 25)
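
A sketch of a filtered, paginated call; all three parameters are optional and the query string is just an example:

    # Illustrative list_repositories arguments (all parameters optional).
    arguments = {
        "query": "zenodo",  # free-text name/description filter
        "page": 1,          # 1-based page
        "page_size": 25,    # 1-1000, default 25
    }
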
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool lists and filters repositories but does not disclose what the response includes (e.g., repository names, descriptions, or metadata), nor does it mention pagination behavior or rate limits. The behavior is straightforward, so the lack of detail is acceptable but not exceptional.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exceptionally concise with only two sentences. The first sentence delivers the core purpose and examples, and the second offers a clear use case. Every word is functional, and there is no redundant information. This efficiency allows agents to quickly grasp the tool's utility.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (3 optional parameters, no output schema, no nested objects), the description provides sufficient context for a basic list operation. However, it lacks a note about the return format or response structure, which would be helpful for an agent to understand what data it will receive. The description is adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% as all three parameters (page, query, page_size) have descriptions in the input schema. The description only reiterates 'filter by query' without adding new semantics for page or page_size. Thus, the description provides no added value beyond the schema, earning the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists DataCite-registered data repositories and provides concrete examples like Zenodo, Dryad, and Figshare. The verb 'List' and resource 'repositories' make the purpose unambiguous, and it effectively distinguishes itself from siblings such as get_doi or search_dois which focus on DOI lookup rather than repository listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives a clear use case: 'Useful for "where do I deposit a dataset" lookups.' This helps the agent understand the context. However, it does not explicitly state when to avoid this tool or suggest alternatives, leaving some ambiguity relative to sibling tools like search_dois that might also be relevant for dataset discovery.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pipeworx_feedback (grade: A)

Tell the Pipeworx team something is broken, missing, or needs to exist. Use when a tool returns wrong/stale data (bug), when a tool you wish existed isn't in the catalog (feature/data_gap), or when something worked surprisingly well (praise). Describe the issue in terms of Pipeworx tools/packs — don't paste the end-user's prompt. The team reads digests daily and signal directly affects roadmap. Rate-limited to 5 per identifier per day. Free; doesn't count against your tool-call quota.

Parameters (JSON Schema)
- type (required): bug = something broke or returned wrong data. feature = a new tool or capability you wish existed. data_gap = data Pipeworx does not currently expose. praise = positive note. other = anything else.
- context (optional): Optional structured context: which tool, pack, or vertical this relates to.
- message (required): Your feedback in plain text. Be specific (which tool, what error, what data was missing). 1-2 sentences typical, 2000 chars max.
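
An illustrative feedback payload; the message is made up, and the shape of the optional context object is an assumption since the page does not spell it out:

    # Illustrative pipeworx_feedback arguments (message text and context shape are hypothetical).
    arguments = {
        "type": "data_gap",  # one of: bug | feature | data_gap | praise | other
        "message": "No tool exposes DOI citation counts over time.",
        "context": {"tool": "search_dois"},  # optional; exact structure not documented on this page
    }
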
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses key behaviors: the team reads digests daily, feedback directly affects roadmap, rate-limited to 5 per identifier per day, and it's free without counting against tool-call quota. This goes beyond annotations (which are absent) to fully inform the agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise at six sentences, and each sentence adds unique value. Front-loaded with purpose, followed by usage guidelines, formatting instructions, and behavioral notes. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 params, 1 enum, nested object, no output schema), the description covers all necessary aspects: purpose, when to use, how to format feedback, constraints, and impact. Completes the picture fully.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds extra value by elaborating on enum meanings (e.g., 'bug = something broke or returned wrong data') and explaining the optional context fields. It also provides guidance on message content (be specific, 1-2 sentences).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool's purpose: reporting issues, missing features, data gaps, or praise to the Pipeworx team. It distinguishes itself from sibling tools (like ask_pipeworx) by focusing on feedback instead of queries or data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use scenarios (bug, feature/data_gap, praise) and instructs users to describe issues in terms of Pipeworx tools/packs, avoiding end-user prompts. Also mentions rate limits and cost, which helps agents decide when to invoke.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (grade: A)

Retrieve a value previously saved via remember, or list all saved keys (omit the key argument). Use to look up context the agent stored earlier — the user's target ticker, an address, prior research notes — without re-deriving it from scratch. Scoped to your identifier (anonymous IP, BYO key hash, or account ID). Pair with remember to save, forget to delete.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)
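
Two illustrative calls: one fetching a specific key, one listing all keys by omitting the argument (the key name is borrowed from the remember schema's examples):

    # Illustrative recall arguments.
    fetch_one = {"key": "target_ticker"}  # a key saved earlier via remember
    list_all = {}                         # omit "key" to list all saved keys
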
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully bears the burden. It clearly discloses that the tool is a read operation (retrieve/list) with no side effects, and notes scoping. No destructive behavior is indicated. This is sufficient for a simple tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences, no unnecessary words. Front-loaded with the core functionality and paired with usage guidance. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one optional parameter, no output schema), the description covers all needed aspects: purpose, usage, scoping, and pairing. No gaps are present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already describes the 'key' parameter with 100% coverage. The tool description adds minimal value beyond the schema, only mentioning that omitting it lists all keys (which the schema's 'optional to list all keys' also implies). Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Retrieve a value previously saved via remember, or list all saved keys (omit the key argument).' This provides a specific verb ('retrieve' or 'list') and resource ('value' or 'keys'), and distinguishes from sibling tools like 'remember' and 'forget'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives context for when to use the tool: 'Use to look up context the agent stored earlier ... without re-deriving it from scratch.' It also notes scoping and pairing with 'remember' and 'forget'. However, it does not explicitly state when not to use it or mention alternative tools beyond the pair.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recent_changes (grade: A)

What's new with a company in the last N days/months? Use when a user asks "what's happening with X?", "any updates on Y?", "what changed recently at Acme?", "brief me on what happened with Microsoft this quarter", "news on Apple this month", or you're monitoring for changes. Fans out to SEC EDGAR (recent filings), GDELT (news mentions in window), and USPTO (patents granted) in parallel. since accepts ISO date ("2026-04-01") or relative shorthand ("7d", "30d", "3m", "1y"). Returns structured changes + total_changes count + pipeworx:// citation URIs.

Parameters (JSON Schema)
- type (required): Entity type. Only "company" supported today.
- since (required): Window start — ISO date ("2026-04-01") or relative ("7d", "30d", "3m", "1y"). Use "30d" or "1m" for typical monitoring.
- value (required): Ticker (e.g., "AAPL") or zero-padded CIK (e.g., "0000320193").
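
An illustrative monitoring call using the relative-shorthand form of since; the ticker is just an example:

    # Illustrative recent_changes arguments.
    arguments = {
        "type": "company",
        "value": "MSFT",  # ticker or zero-padded CIK
        "since": "30d",   # relative shorthand; ISO dates such as "2026-04-01" also accepted
    }
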
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description fully carries the burden. It details the fan-out to SEC EDGAR, GDELT, and USPTO in parallel, explains the since parameter format, and specifies the return structure including citation URIs. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single paragraph that front-loads the purpose and usage, then provides details. It is efficient without being overly terse. A slightly more structured format could improve readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description adequately explains the return type (structured changes, total_changes count, pipeworx:// URIs). It covers the tool's multi-source fan-out and parameter nuances, making it fit for an agent of moderate complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining the since parameter format with examples ('7d', '30d', '3m', '1y') and clarifying that value can be a ticker or CIK. This extra context elevates the score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with a clear question 'What's new with a company in the last N days/months?' and provides concrete example queries. It effectively distinguishes itself from sibling tools like entity_profile or compare_entities by focusing on recent changes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly lists use cases with example user queries ('what's happening with X?', 'news on Apple this month'). It does not explicitly state when not to use or mention alternatives, but the context is clear enough for an agent to decide.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (grade: A)

Save data the agent will need to reuse later — across this conversation or across sessions. Use when you discover something worth carrying forward (a resolved ticker, a target address, a user preference, a research subject) so you don't have to look it up again. Stored as a key-value pair scoped by your identifier. Authenticated users get persistent memory; anonymous sessions retain memory for 24 hours. Pair with recall to retrieve later, forget to delete.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
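
An illustrative key-value save, reusing one of the schema's example keys; the stored value is made up:

    # Illustrative remember arguments.
    arguments = {
        "key": "target_ticker",  # example key from the schema
        "value": "AAPL",         # any text: findings, addresses, preferences, notes
    }
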
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key-value storage, scoping by identifier, and persistence details (24 hours for anonymous, persistent for authenticated). It implies a write operation without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, well-structured, and front-loaded with the core purpose. Every sentence adds information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of a key-value store, the description covers purpose, usage, parameters, and durability. It lacks return value info, but for a write tool, that is acceptable. No output schema exists, so the burden is moderate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters, but the description adds value with examples for key (e.g., 'subject_property') and value (any text), enhancing understanding beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool saves data for later reuse, distinguishes from sibling tools like recall and forget, and provides concrete examples of what to store (resolved ticker, target address, user preference, research subject).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises 'Use when you discover something worth carrying forward' and suggests pairing with recall and forget. It does not state when not to use, but positive guidance is strong and includes scope and durability info.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_entity (grade: A)

Look up the canonical/official identifier for a company or drug. Use when a user mentions a name and you need the CIK (for SEC), ticker (for stock data), RxCUI (for FDA), or LEI — the ID systems that other tools require as input. Examples: "Apple" → AAPL / CIK 0000320193, "Ozempic" → RxCUI 1991306 + ingredient + brand. Returns IDs plus pipeworx:// citation URIs. Use this BEFORE calling other tools that need official identifiers. Replaces 2–3 lookup calls.

Parameters (JSON Schema)
- type (required): Entity type: "company" or "drug".
- value (required): For company: ticker (AAPL), CIK (0000320193), or name. For drug: brand or generic name (e.g., "ozempic", "metformin").
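
Illustrative calls for each entity type, mirroring the examples in the description:

    # Illustrative resolve_entity arguments.
    company_args = {"type": "company", "value": "Apple"}  # name, ticker, or CIK accepted
    drug_args = {"type": "drug", "value": "ozempic"}      # brand or generic name
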
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description bears full burden. It explains return value (IDs + pipeworx:// URIs) and scope (company/drug ID systems), but could explicitly state it is a read-only lookup. Still transparent enough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose. Every sentence adds value: examples, ordering guidance, scope. Concise for the information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a lookup tool with no output schema. Covers multiple ID systems, return format, and usage workflow. Provides enough context for an agent to know what to expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers both parameters 100%. Description adds concrete examples (Apple→AAPL, Ozempic→RxCUI) and clarifies accepted input formats for 'value', enhancing schema meaning beyond bare enum/string types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it resolves entities to canonical identifiers (CIK, ticker, RxCUI, LEI) with concrete examples, distinguishing it from sibling tools by noting it replaces 2-3 lookup calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use ('when a user mentions a name and you need the...') and ordering ('Use this BEFORE calling other tools that need official identifiers'). Provides clear context without needing exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_dois (grade: A)

Search DataCite-registered DOIs. Filter by free-text query, resource type (dataset, software, etc.), year, publisher, or affiliation. Returns DOI, title, creators, publisher, type, year, citation count.

Parameters (JSON Schema)
- page (optional): 1-based page (default 1)
- year (optional): Publication year
- query (optional): Free-text query (titles, descriptions, creators)
- page_size (optional): 1-1000 (default 25)
- publisher (optional): Publisher name filter
- affiliation (optional): Creator affiliation
- resource_type (optional): Dataset | Software | Text | Image | Sound | Other (case-sensitive)
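
A sketch of a filtered search; the query string is made up, and resource_type is case-sensitive per the schema:

    # Illustrative search_dois arguments (query text is hypothetical).
    arguments = {
        "query": "ocean temperature",  # free-text over titles, descriptions, creators
        "resource_type": "Dataset",    # case-sensitive: Dataset | Software | Text | Image | Sound | Other
        "page_size": 25,               # 1-1000, default 25
    }
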
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully carries the burden of behavioral disclosure. It accurately describes a read-only search operation and specifies output fields, though it does not mention rate limits or pagination behavior beyond schema defaults.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no wasted words. It front-loads the core action and filters, then lists return fields, making it easy for an agent to quickly understand the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 7 parameters with full schema descriptions and no output schema, the description adequately covers what the tool does, the available filters, and the return fields, making it complete enough for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter descriptions. The tool description reiterates filter categories but adds no new information beyond what the schema already provides, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches DataCite-registered DOIs and lists specific filters (e.g., query, resource type, year, publisher, affiliation) and return fields (DOI, title, creators, etc.), distinguishing it from the sibling 'get_doi' tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains available filters and what fields are returned, but does not provide guidance on when to use this tool versus alternatives (e.g., 'get_doi' for single DOI lookup) or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

validate_claim (grade: A)

Fact-check, verify, validate, or confirm/refute a natural-language factual claim or statement against authoritative sources. Use when an agent needs to check whether something a user said is true ("Is it true that…?", "Was X really…?", "Verify the claim that…", "Validate this statement…"). v1 supports company-financial claims (revenue, net income, cash position for public US companies) via SEC EDGAR + XBRL. Returns a verdict (confirmed / approximately_correct / refuted / inconclusive / unsupported), extracted structured form, actual value with pipeworx:// citation, and percent delta. Replaces 4–6 sequential calls (NL parsing → entity resolution → data lookup → numeric comparison).

Parameters (JSON Schema)
- claim (required): Natural-language factual claim, e.g., "Apple's FY2024 revenue was $400 billion" or "Microsoft made about $100B in profit last year".
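
An illustrative call using a claim taken directly from the schema's example:

    # Illustrative validate_claim arguments.
    arguments = {"claim": "Apple's FY2024 revenue was $400 billion"}  # example claim from the schema
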
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses return values (verdict types, actual value, citation, delta) and mentions replacing sequential calls, indicating efficiency. It could additionally clarify it is read-only, but overall provides good behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that are front-loaded with purpose and include usage, scope, and return details. No filler words; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description explains the return structure. It covers input parameter fully and gives domain limitations. Lacks info on authentication or rate limits, but given tool simplicity, it is largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100% with a clear description of the 'claim' parameter. The description adds domain-specific context (supports US public company finances), which is not in the schema, adding meaning beyond the basic parameter description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: fact-check, verify, validate, or confirm/refute a factual claim. It distinguishes itself from siblings by noting that it replaces 4-6 sequential calls, and sibling tools like 'compare_entities' clearly serve different functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit usage guidance is given: 'Use when an agent needs to check whether something a user said is true' with example phrases. It also scopes to 'company-financial claims' for v1, implying when not to use, but does not directly list alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
