
Server Details

Crossref MCP — wraps the Crossref REST API (academic papers, free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-crossref
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 8 of 8 tools scored. Lowest: 2.9/5.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes: get_journal, get_work, and search_works handle academic metadata retrieval; remember, recall, and forget manage memory; discover_tools and ask_pipeworx assist with tool discovery and natural language queries. However, ask_pipeworx and discover_tools could cause confusion as both help find tools or data, with ask_pipeworx being more automated and discover_tools requiring manual search.

Naming Consistency: 3/5

Naming is mixed: get_journal, get_work, search_works, forget, recall, and remember follow a consistent verb_noun or verb pattern, but ask_pipeworx and discover_tools deviate with less predictable naming (ask_pipeworx uses a brand name, discover_tools is verb_noun but differs in style). This creates a readable but inconsistent convention across the set.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a server combining academic metadata access (Crossref) with memory management and tool discovery. Each tool serves a clear function without bloat, making it manageable for agents to navigate and use effectively in typical workflows.

Completeness: 4/5

For academic metadata, the tools cover key operations: search_works for discovery, get_work and get_journal for detailed retrieval. Memory tools (remember, recall, forget) provide full CRUD-like functionality. The inclusion of ask_pipeworx and discover_tools adds utility for tool discovery. There may be minor gaps in advanced academic features such as filtering or citation analysis, but core needs are met.

Available Tools

8 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
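
As a rough sketch (not taken from the server's own documentation), an MCP client would invoke this tool through a JSON-RPC tools/call request carrying the single question argument; the payload below is illustrative, and the question text is an arbitrary example.

```python
import json

# Illustrative JSON-RPC body for calling ask_pipeworx via MCP tools/call.
# The question text is an arbitrary example, not from the server docs.
request_body = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            "question": "What is the most cited paper on CRISPR from 2020?"
        },
    },
}

print(json.dumps(request_body, indent=2))
```
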
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it picks the right tool, fills arguments automatically, and returns results. However, it lacks details on limitations like rate limits, error handling, or data source constraints, which would be helpful for a tool with such broad functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core functionality, uses efficient sentences with zero waste, and includes illustrative examples that enhance understanding without verbosity. Every sentence earns its place by clarifying purpose and usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language querying with automatic tool selection) and lack of annotations or output schema, the description is mostly complete but could benefit from more on behavioral limits or response formats. It adequately covers purpose and usage, though some operational details are omitted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'question' as a natural language string. The description adds minimal value beyond this by reinforcing it's a 'question or request in plain English', but doesn't provide additional syntax or format details, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('ask a question', 'get an answer') and resources ('best available data source'), and distinguishes it from siblings by emphasizing its natural language interface versus needing to browse tools or learn schemas. It provides concrete examples that illustrate its unique function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool ('just describe what you need') and when not to ('no need to browse tools or learn schemas'), offering clear alternatives by implication (use other tools for structured queries). The examples reinforce appropriate usage contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
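
For illustration, a hedged sketch of a call body mirroring the two schema parameters; the query text reuses the schema's trade-data example and the limit value is arbitrary.

```python
import json

# Illustrative JSON-RPC body for discover_tools. "limit" is optional
# (default 20, max 50 per the schema); the query reuses a schema example.
request_body = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {
            "query": "find trade data between countries",
            "limit": 10,
        },
    },
}

print(json.dumps(request_body, indent=2))
```
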
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: the tool searches by natural language description and returns ranked results ('most relevant'). However, it doesn't mention limitations like rate limits, authentication requirements, or error conditions that would be important for a search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise and front-loaded. The first sentence states the core functionality, the second explains the return value, and the third provides crucial usage guidance. Every sentence earns its place with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search functionality with 2 parameters) and lack of annotations/output schema, the description does well by explaining the search behavior and providing strong usage guidance. However, it could better address behavioral aspects like result format details or error handling to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema - it implies the 'query' parameter accepts natural language descriptions but doesn't provide additional syntax or format details. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resources ('Pipeworx tool catalog', 'most relevant tools with names and descriptions'). It distinguishes itself from sibling tools by focusing on tool discovery rather than retrieving specific entities like journals, works, or workspaces.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context for when to use this tool versus alternatives, including a quantitative threshold (500+ tools) and a specific scenario (finding tools for a task).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: C

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
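
A minimal sketch of a deletion call; the key is a hypothetical example, and since the description does not say whether deletion is permanent, the comment flags that as unknown rather than stating it.

```python
import json

# Illustrative JSON-RPC body for forget. The key is a hypothetical example.
# The description does not state whether deletion is permanent or reversible,
# so treat this as a destructive, possibly irreversible operation.
request_body = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "forget",
        "arguments": {"key": "subject_property"},
    },
}

print(json.dumps(request_body, indent=2))
```
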
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states 'Delete' which implies a destructive mutation, but doesn't disclose whether this is permanent, reversible, requires specific permissions, or has side effects. For a destructive operation with zero annotation coverage, this is a significant gap in behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's appropriately sized and front-loaded, directly stating the tool's purpose without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't explain what happens after deletion (e.g., confirmation, error handling), the scope of 'stored memory', or how this interacts with sibling tools. More context is needed given the mutation nature.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'key' documented as 'Memory key to delete'. The description adds no additional meaning beyond what the schema provides, such as key format or examples. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and resource ('a stored memory by key'), providing specific verb+resource pairing. However, it doesn't distinguish this tool from potential siblings like 'recall' or 'remember' that might also manipulate memories, missing explicit differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With siblings like 'recall' (likely for retrieving memories) and 'remember' (likely for storing memories), the description lacks context about when deletion is appropriate or what prerequisites might exist.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_journal: A

Get the 5 most recent works from a journal by ISSN (e.g., "2041-1723"). Returns titles, authors, DOIs, and publication dates.

Parameters (JSON Schema)
issn (required): Journal ISSN (e.g., "1476-4687" for Nature)
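
Since the server wraps the Crossref REST API, the sketch below queries Crossref's public journals endpoint directly with the schema's Nature ISSN; this is one plausible equivalent of what the tool does, and the field names follow the public Crossref response rather than the tool's undocumented output format.

```python
import requests

# Plausible direct equivalent of get_journal against the free, no-auth Crossref API.
# The ISSN is the Nature example from the parameter docs; sorted by publication date,
# newest first, limited to 5 items to match the tool's described behavior.
resp = requests.get(
    "https://api.crossref.org/journals/1476-4687/works",
    params={"rows": 5, "sort": "published", "order": "desc"},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    doi = item.get("DOI")
    title = (item.get("title") or ["(untitled)"])[0]
    print(f"{doi}: {title}")
```
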
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return format (title, authors, DOI, publication date) and the limit of 5 most recent works, which is useful. However, it lacks details on error handling, rate limits, authentication needs, or whether it's a read-only operation, leaving gaps for a tool with no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, input, and output without any wasted words. It is front-loaded with the core functionality, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is reasonably complete for basic use. It covers what the tool does and what it returns. However, without annotations or an output schema, it lacks details on behavioral traits like error conditions or response structure, which could hinder agent reliability.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already documents the 'issn' parameter with an example. The description adds context by specifying it's for a journal and that it retrieves recent works, but does not provide additional semantic details beyond what the schema offers, such as format constraints or edge cases.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), resource ('5 most recent works published in a journal'), and scope ('by its ISSN'), with distinct output details. It differentiates from sibling tools like 'get_work' (likely for individual works) and 'search_works' (likely broader searches) by focusing on journal-specific recent publications.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when needing recent works from a specific journal via ISSN, but it does not explicitly state when to use this tool versus alternatives like 'search_works' or 'get_work'. No exclusions or prerequisites are mentioned, leaving some ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_work: A

Get full metadata for a publication by DOI (e.g., "10.1038/nature12373"). Returns title, authors, abstract, journal, publisher, citations, and subjects.

Parameters (JSON Schema)
doi (required): DOI of the work (e.g., "10.1038/nature12373")
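
The sketch below shows the underlying Crossref works endpoint for the same DOI example; the MCP tool's actual response is likely a curated subset of these fields.

```python
import requests

# Direct Crossref lookup that get_work plausibly wraps (free, no auth).
resp = requests.get("https://api.crossref.org/works/10.1038/nature12373", timeout=30)
resp.raise_for_status()

work = resp.json()["message"]
print((work.get("title") or [""])[0])                       # title
print([a.get("family") for a in work.get("author", [])])    # author family names
print((work.get("container-title") or [""])[0])             # journal
print(work.get("publisher"))                                 # publisher
print(work.get("is-referenced-by-count"))                    # citation count
```
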
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return data (title, authors, etc.) and implies a read-only operation, but lacks details on error handling, rate limits, authentication needs, or performance characteristics. It adds basic context but misses deeper behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first clause, followed by a concise list of return values. It uses two efficient sentences with zero waste, making it easy to scan and understand quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single required parameter) and no output schema, the description adequately covers the purpose and return data. However, it lacks information on error cases (e.g., invalid DOI), response format details, or integration with sibling tools, leaving some contextual gaps for an agent to handle edge cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'doi' fully documented in the schema. The description adds no additional parameter semantics beyond what the schema provides (e.g., no extra examples or constraints), so it meets the baseline for high schema coverage without compensating further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get full metadata'), resource ('academic work'), and identifier ('by its DOI'), distinguishing it from sibling tools like 'get_journal' (journal-level) and 'search_works' (search multiple). It precisely defines what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates usage context by specifying 'by its DOI', suggesting this tool is for retrieving metadata when a DOI is known. However, it does not explicitly state when to use it versus alternatives like 'search_works' (for broader searches) or 'get_journal', nor does it provide exclusions or prerequisites, leaving some guidance gaps.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
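
A sketch of the two call shapes the description implies: one retrieving a single key and one with the key omitted to list everything stored (the key name is hypothetical).

```python
import json

# Illustrative JSON-RPC bodies for recall's two modes. The key name is hypothetical.
fetch_one = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {"key": "target_ticker"}},
}

list_all = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {}},  # omit key to list all stored keys
}

print(json.dumps(fetch_one, indent=2))
print(json.dumps(list_all, indent=2))
```
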
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's dual behavior (retrieve by key or list all) and persistence across sessions ('saved earlier in the session or in previous sessions'). However, it doesn't mention potential limitations like maximum memory size, retrieval time, or error conditions when keys don't exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two sentences that each serve distinct purposes: the first explains functionality, the second provides usage context. There is zero wasted language, and the most important information (the dual retrieve/list behavior) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (dual functionality, session persistence) and no annotations or output schema, the description does well by explaining both retrieval modes and cross-session persistence. However, it doesn't describe the format of returned memories or potential error cases, leaving some gaps for a tool that handles stored data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so the baseline is 3. The description adds meaningful context by explaining the semantic effect of omitting the key parameter ('omit to list all keys'), which clarifies the tool's dual functionality beyond what the schema alone provides. This elevates the score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations. The description goes beyond the name 'recall' to explain what is being recalled.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It also explains when to omit the key parameter ('omit key to list all keys'), which directly addresses the tool's dual functionality. This gives clear context for when to use this tool versus alternatives like 'search_works' or 'get_journal'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
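
A sketch of a store call using one of the schema's example keys; the value is an arbitrary note, and the retention comment simply restates the description.

```python
import json

# Illustrative JSON-RPC body for remember. The key comes from a schema example;
# the value is an arbitrary note. Per the description, anonymous sessions retain
# memories for 24 hours, while authenticated users get persistent memory.
request_body = {
    "jsonrpc": "2.0",
    "id": 6,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {
            "key": "user_preference",
            "value": "Prefers open-access papers published after 2020",
        },
    },
}

print(json.dumps(request_body, indent=2))
```
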
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool performs a write operation ('Store'), specifies persistence characteristics ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and implies session-scoped storage. It does not cover aspects like error conditions or performance limits, but adds substantial value beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core purpose with examples, and the second adds critical behavioral context about persistence. Every phrase earns its place, with no redundant or vague language, making it front-loaded and highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, 100% schema coverage, no output schema, and no annotations, the description is largely complete. It covers the tool's purpose, usage context, and key behavioral traits (persistence rules). However, it lacks details on return values or error handling, which would be beneficial given the absence of an output schema, slightly limiting completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('key' and 'value') well-documented in the schema. The description does not add any parameter-specific details beyond what the schema provides (e.g., it doesn't explain key constraints or value formatting). Given the high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (likely retrieval) and 'forget' (likely deletion). It provides concrete examples of what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), providing clear context for its application. However, it does not mention when not to use it or name specific alternatives among siblings (e.g., how it differs from 'recall' or 'forget'), which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_works: A

Search for academic papers, books, and datasets by keyword. Returns titles, authors, journals, DOIs, and citation counts.

Parameters (JSON Schema)
limit (optional): Number of results to return (1-100, default 10)
query (required): Search query (e.g., "climate change machine learning")
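
The sketch below hits the Crossref works search endpoint that this tool plausibly wraps, reusing the schema's example query; under that assumption, the tool's limit parameter maps to Crossref's rows parameter.

```python
import requests

# Plausible direct equivalent of search_works via the public Crossref REST API.
# The query reuses the schema example; "rows" corresponds to the tool's "limit".
resp = requests.get(
    "https://api.crossref.org/works",
    params={"query": "climate change machine learning", "rows": 10},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    doi = item.get("DOI")
    title = (item.get("title") or ["(untitled)"])[0]
    citations = item.get("is-referenced-by-count", 0)
    print(f"{doi}: {title} (cited {citations} times)")
```
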
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the return fields (title, authors, etc.) but doesn't disclose important behavioral traits like rate limits, authentication needs, pagination, error handling, or whether this is a read-only operation. The description adds minimal behavioral context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys purpose, method, and return values. Every element earns its place with zero waste, making it appropriately sized and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides basic functionality and return fields but lacks completeness for a search tool. It doesn't cover error cases, result ordering, or detailed behavioral context, leaving gaps in understanding how the tool behaves in practice.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., no examples of query syntax beyond the schema's example, no clarification on 'limit' behavior). Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search academic works'), the resource ('Crossref index'), the method ('by keyword'), and distinguishes from sibling tools by specifying it's for searching rather than retrieving specific items like 'get_journal' or 'get_work'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (searching academic works by keyword) but doesn't explicitly state when to use this tool versus the sibling tools 'get_journal' or 'get_work'. No guidance on exclusions or alternatives is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

