
Server Details

PubMed MCP — wraps the NCBI E-utilities API (biomedical literature, free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-pubmed
GitHub Stars: 0
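
Since the listing does not display the endpoint URL, the snippet below uses a placeholder. As a minimal sketch, assuming the official `mcp` Python SDK and its Streamable HTTP client, connecting and listing the server's tools would look roughly like this:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint: the listing above does not show the real server URL.
SERVER_URL = "https://example.com/mcp"

async def main() -> None:
    # The Streamable HTTP client yields (read_stream, write_stream, get_session_id).
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()  # MCP handshake
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```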

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 3.9/5 across 7 of 7 tools scored. Lowest: 2.9/5.

Server Coherence: Grade A
Disambiguation: 4/5

Most tools have distinct purposes: discover_tools is for tool discovery, forget/recall/remember handle memory operations, and get_abstract/get_summary/search_pubmed are for PubMed queries. However, get_abstract and get_summary could be confused, since both retrieve article details, though they return different levels of detail (full abstract vs. metadata summary).

Naming Consistency: 4/5

Tools follow a consistent verb_noun pattern (e.g., discover_tools, get_abstract, search_pubmed), with all using snake_case. The only minor deviation is 'forget' and 'recall' being single verbs without nouns, but they fit the memory theme and are still clear.

Tool Count: 5/5

With 7 tools, the count is well-scoped for a server focused on PubMed search and memory management. Each tool has a clear role, and there's no bloat or missing essential functions, making it efficient for agents to navigate.

Completeness: 4/5

The server covers core PubMed operations (search, get metadata, get abstract) and memory functions (store, retrieve, delete), with no major gaps. A minor gap is the lack of dedicated filters for date or journal, but agents can work around this by embedding PubMed field tags (e.g., a date range or journal name) in the query string.

Available Tools

8 tools
ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters:
- question (required): Your question or request in natural language
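
At the wire level this is an ordinary MCP `tools/call` request. A sketch of the JSON-RPC payload, reusing one of the example questions from the description:

```python
# JSON-RPC 2.0 payload for an MCP tools/call request targeting ask_pipeworx.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
```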
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains that Pipeworx 'picks the right tool, fills the arguments, and returns the result,' which adds useful context about automation and abstraction. However, it lacks details on potential limitations, error handling, data sources, or response formats, leaving gaps in behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by explanatory details and concrete examples. Every sentence earns its place by clarifying functionality or illustrating usage, with no redundant or vague language. It efficiently communicates the tool's value in a compact format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (natural language querying with automated tool selection) and lack of annotations or output schema, the description is moderately complete. It covers the high-level workflow and use cases but omits details on result types, error conditions, or data source limitations. For a tool with no structured output documentation, more behavioral context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'question' parameter well-documented as 'Your question or request in natural language.' The description reinforces this by emphasizing 'plain English' and providing examples, but does not add significant semantic value beyond what the schema already states. The baseline score of 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'), distinguishing it from sibling tools like search_pubmed or get_summary by emphasizing natural language input without tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: for asking questions in plain English without needing to browse tools or learn schemas. It includes examples like 'What is the US trade deficit with China?' to illustrate appropriate use cases. However, it does not explicitly state when not to use it or name specific alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters:
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
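
With the `session` from the connection sketch above, a call might look like the following; the printed content structure is not documented on this page, so treat it as something to inspect at runtime:

```python
# Assumes `session` is an initialized ClientSession
# (runs inside the async context from the connection sketch).
result = await session.call_tool(
    "discover_tools",
    {"query": "find biomedical literature on a drug", "limit": 5},
)
# The description promises tool names and descriptions; the exact shape of
# each content block is not documented here.
for block in result.content:
    print(block)
```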
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a search operation (implied read-only), returns relevant tools with names and descriptions, and emphasizes it should be called first in large tool environments. However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with three sentences that each serve a distinct purpose: the first explains what the tool does, the second what it returns, and the third provides crucial usage guidance. Every word earns its place, and the information is front-loaded with no wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search function with 2 parameters), no annotations, and no output schema, the description does well by explaining the purpose, usage context, and behavioral aspects. However, it doesn't describe the return format (beyond mentioning 'names and descriptions') or potential limitations, leaving some gaps for a tool without structured output documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain query formatting or limit implications). The baseline score of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search the Pipeworx tool catalog') and resource ('tool catalog'), and explicitly distinguishes it from siblings by emphasizing its role for discovery among 500+ tools. It provides a concrete action and target resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), framing it as the initial step in large tool environments. It gives direct guidance on optimal usage timing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (Grade: C)

Delete a stored memory by key.

Parameters:
- key (required): Memory key to delete
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a deletion operation, implying it's destructive and irreversible, which is critical context. However, it lacks details on permissions needed, error handling (e.g., what happens if the key doesn't exist), or side effects, leaving significant gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Delete') and resource. There is zero waste—every word earns its place, making it easy to parse quickly without unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't explain what 'stored memory' entails in this context, the implications of deletion, or what the response looks like (e.g., success confirmation or error). Given the complexity of a mutation operation, more behavioral context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'key' parameter documented as 'Memory key to delete'. The description adds no additional meaning beyond this, simply restating 'by key'. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't compensate with extra context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete') and resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the verb 'Delete' distinguishes it as a destructive operation versus retrieval or creation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory key), when not to use it, or how it relates to sibling tools like 'recall' (which likely retrieves memories) or 'remember' (which likely creates them).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_abstract (Grade: A)

Get full abstract text by PubMed ID with structured sections (background, methods, results, conclusions) when available.

Parameters:
- id (required): A single PubMed ID (e.g., "33579999")
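
The server wraps NCBI E-utilities, so this tool presumably maps onto EFetch. For comparison, the raw E-utilities request for the same data (the direct NCBI call below is real; that the tool uses these exact parameters is an assumption):

```python
import urllib.parse
import urllib.request

# Direct NCBI EFetch call returning the plain-text abstract for one PMID.
params = urllib.parse.urlencode(
    {"db": "pubmed", "id": "33579999", "rettype": "abstract", "retmode": "text"}
)
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?{params}"
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))
```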
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return format (structured sections such as background, methods, results, and conclusions, when available), which is valuable behavioral context, but does not mention potential errors, rate limits, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: a single clear sentence that efficiently conveys the tool's purpose and return behavior without any wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no annotations, no output schema), the description is reasonably complete. It explains what the tool does and what it returns, though it could benefit from more behavioral details like error handling or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'id'. The description adds marginal value by reinforcing that the tool takes a single PubMed ID, but does not provide additional syntax or format details beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('full abstract text' for a single PubMed ID), and distinguishes it from sibling tools by specifying that it retrieves abstracts rather than summaries or search results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (fetching a known article by its PubMed ID), but does not explicitly state when not to use it or name alternatives like the sibling tools get_summary or search_pubmed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_summary (Grade: A)

Get article metadata by PubMed ID. Returns title, authors, journal, publication date, and DOI. Batch multiple IDs in one request.

Parameters:
- ids (required): Comma-separated PubMed IDs (e.g., "33579999,34567890")
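
Likewise, the batch metadata lookup presumably corresponds to ESummary, which accepts the same comma-separated ID format:

```python
import json
import urllib.parse
import urllib.request

# Direct NCBI ESummary call; the ids are comma-separated, matching this tool's input.
params = urllib.parse.urlencode(
    {"db": "pubmed", "id": "33579999,34567890", "retmode": "json"}
)
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?{params}"
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["result"]
for pmid in result["uids"]:
    doc = result[pmid]
    print(pmid, doc["title"], doc["fulljournalname"], doc["pubdate"])
```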
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the tool's behavior by specifying what it returns (title, authors, journal, publication date, DOI) and the input format, but does not mention potential limitations like rate limits, error conditions, or authentication needs. It adequately describes the core operation without contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by specific return details and batching guidance, all in three efficient sentences with zero wasted words. Every sentence adds essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no annotations, no output schema), the description is mostly complete for a read-only retrieval tool. It clearly states the purpose, input, and return fields. However, without an output schema, it could benefit from more detail on response structure or error handling, but the essentials are covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents the single parameter 'ids'. The description adds no additional parameter semantics beyond what the schema provides (e.g., it doesn't clarify format beyond 'comma-separated' or discuss validation), meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get article metadata'), target resource (PubMed articles), and lookup key ('by PubMed ID'), distinguishing it from sibling tools like 'get_abstract' (which returns full abstracts) and 'search_pubmed' (which searches rather than retrieves by ID).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (retrieving metadata for known PubMed IDs) but does not explicitly state when to use this tool versus alternatives like 'get_abstract' or 'search_pubmed'. It provides clear input requirements but lacks explicit exclusions or comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters:
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that memories can be retrieved from current or previous sessions, implying persistence across sessions. However, it doesn't mention error handling (e.g., what happens if key doesn't exist), performance traits, or rate limits, leaving behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core functionality, and the second adds usage context. Every sentence earns its place with no wasted words, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 optional parameter, no output schema, no annotations), the description is adequate but has gaps. It covers purpose and basic usage but lacks details on return format, error cases, or how it interacts with siblings like 'remember'. It's minimally viable but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the baseline is 3. The description adds meaningful context: it explains that omitting the key lists all memories, which clarifies the parameter's optional nature and its effect on behavior, providing value beyond the schema's basic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' It specifies the verb ('retrieve'/'list') and resource ('memory'), but doesn't explicitly differentiate from sibling tools like 'remember' or 'forget' beyond the retrieval focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use it: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It also explains the key parameter's role: 'omit key' to list all. However, it doesn't explicitly state when not to use it or name alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters:
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
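
The three memory tools form a simple key-value lifecycle. A sketch reusing the `session` from the connection example; the key and value here are illustrative:

```python
# Store, retrieve, then delete a memory entry (illustrative key and value;
# runs inside the async context from the connection sketch).
await session.call_tool(
    "remember",
    {"key": "target_ticker", "value": "AAPL, flagged for earnings review"},
)
stored = await session.call_tool("recall", {"key": "target_ticker"})
print(stored.content)
listing = await session.call_tool("recall", {})  # omit key to list all memories
print(listing.content)
await session.call_tool("forget", {"key": "target_ticker"})
```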
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a storage operation (implied mutation), specifies persistence differences based on authentication ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and hints at session scope. However, it doesn't cover potential errors, rate limits, or exact response format, leaving some gaps for a tool with mutation implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with three short sentences that convey purpose, usage, and behavioral details without waste. Each sentence adds distinct value: the first defines the tool's function, the second gives concrete use cases, and the third clarifies persistence behavior, making it concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (storage with authentication-based persistence), no annotations, and no output schema, the description does a good job covering key aspects like purpose, usage, and behavioral traits. However, it lacks details on return values or error handling, which would be beneficial since there's no output schema. It's mostly complete but has minor gaps in output expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description adds minimal value beyond the schema by implying the parameters are used for storage but doesn't provide additional syntax, constraints, or usage nuances. This meets the baseline of 3 when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (likely for retrieval) and 'forget' (likely for deletion). It provides concrete examples of what can be stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose explicit and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly state when not to use it or name alternatives. For example, it doesn't specify if 'recall' should be used for retrieval instead, though this is implied. The guidance is helpful but lacks explicit exclusions or named sibling alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_pubmed (Grade: A)

Search PubMed biomedical literature by keyword, author, or MeSH term (e.g., "cancer immunotherapy", "author:Smith J"). Returns PubMed IDs for fetching full details.

Parameters:
- limit (optional): Number of results to return (1-100, default 10)
- query (required): Search query (e.g., "CRISPR cancer therapy", "Smith J[Author]", "COVID-19[MeSH]")
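
And the search presumably maps onto ESearch. The raw equivalent returns the same kind of PMIDs, which would then go to get_summary or get_abstract:

```python
import json
import urllib.parse
import urllib.request

# Direct NCBI ESearch call; retmax mirrors this tool's `limit` parameter.
params = urllib.parse.urlencode(
    {"db": "pubmed", "term": "CRISPR cancer therapy", "retmax": 10, "retmode": "json"}
)
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"
with urllib.request.urlopen(url) as resp:
    pmids = json.load(resp)["esearchresult"]["idlist"]
print(pmids)  # feed these PMIDs into get_summary or get_abstract
```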
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns a list of PubMed IDs, which implies a read-only operation, but doesn't specify other behaviors like rate limits, authentication requirements, or error handling. The description adds some context about the output format but lacks details on pagination or result ordering.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first states the purpose and search methods, and the second explains the output and how it feeds follow-up detail fetches. It is appropriately sized, front-loaded with key information, and every sentence earns its place by adding distinct value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search operation with 2 parameters), no annotations, and no output schema, the description is reasonably complete. It covers the purpose, usage context with siblings, and output format. However, it could improve by addressing potential limitations like result ordering or default behaviors beyond the limit parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters (query and limit). The description adds minimal value beyond the schema by mentioning search types (keyword, author, MeSH) that align with the query parameter examples, but doesn't provide additional syntax or format details. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search'), resource (PubMed biomedical literature), and search methods ('by keyword, author, or MeSH term'). It distinguishes this tool from its siblings by explaining that it returns PubMed IDs for fetching full details, establishing its role as the first step of a workflow completed by get_summary or get_abstract.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by mentioning the types of searches (keyword, author, MeSH) and noting that the returned PubMed IDs are for fetching full details, a job handled by the sibling tools get_summary and get_abstract. However, it does not name those siblings explicitly, and it lacks guidance on when NOT to use this tool or on alternatives for different search needs, such as filtering by date or journal.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
