
Server Details

Multi-engine scholarly research server for search, traversal, full text, and reading lists.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: laibniz/scholarfetch
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

12 tools
scholarfetch_abstract (Grade A)

Read the best abstract available for a paper. Use with a DOI or with author_name + candidate_index + paper_index after author_papers. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.
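Since the engines values form a fixed set, a client can validate the comma-separated string before invoking any scholarfetch tool that accepts it. A minimal sketch (the validate_engines helper is hypothetical, not part of the server; the allowed names come from the description above):

```python
# Engine names accepted by the scholarfetch tools, per the description.
ALLOWED_ENGINES = {
    "elsevier", "openalex", "crossref", "arxiv",
    "europepmc", "springer", "semanticscholar",
}

def validate_engines(engines: str) -> list[str]:
    """Split a comma-separated engines string and reject unknown names."""
    names = [e.strip() for e in engines.split(",") if e.strip()]
    unknown = [e for e in names if e not in ALLOWED_ENGINES]
    if unknown:
        raise ValueError(f"unknown engines: {unknown}")
    return names
```

For example, validate_engines("openalex, arxiv") yields a clean two-element list, while an unknown name such as "scopus" fails fast instead of surfacing as a server-side error.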

Parameters (JSON Schema)
  doi (optional)
  engines (optional)
  author_name (optional)
  paper_index (optional)
  candidate_index (optional)

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Lists specific engines used (elsevier, openalex, etc.) which adds context, but fails to explain what 'best' means (selection algorithm), failure modes, or return behavior when abstracts are unavailable. No safety/scope disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first states purpose, second combines usage guidelines with parameter syntax. Every clause earns its place. Appropriate density for the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Strong coverage given output schema exists (exempting return value documentation). Workflow dependencies and parameter semantics are complete. Minor gap: no mention of not-found behavior or rate limiting considerations for the multi-engine search.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Critical compensation for 0% schema coverage: explains doi via usage pattern, documents author_name/candidate_index/paper_index through workflow example, and explicitly details engines parameter format ('comma-separated subset') plus complete enum values. All 5 parameters are semantically covered.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Read') + resource ('abstract') + scope ('best available'). Effectively distinguishes from sibling 'scholarfetch_article_text' (full text vs abstract) and 'scholarfetch_author_papers' (list vs fetch).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance: states two input patterns (DOI vs author_name + indices) and critical workflow prerequisite ('after author_papers'). Clearly defines dependency chain with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scholarfetch_article_text (Grade A)

Read full paper text when machine-readable content is recoverable. Use with a DOI or with author_name + candidate_index + paper_index. Uses Elsevier first, then open-access fallbacks such as Springer OA, Europe PMC, arXiv PDF, and generic PDF URLs when text is recoverable. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.

Parameters (JSON Schema)
  doi (optional)
  engines (optional)
  author_name (optional)
  paper_index (optional)
  candidate_index (optional)

Output Schema (JSON Schema)
  result (required)
Behavior: 4/5

With no annotations provided, the description carries the full burden and successfully discloses the multi-source fallback chain (Elsevier → Springer OA → Europe PMC → arXiv) and the conditional limitation ('when machine-readable content is recoverable').

Conciseness: 5/5

Three information-dense sentences with zero waste: purpose upfront, followed by usage patterns, then implementation details (fallbacks) and parameter syntax. Every clause delivers actionable guidance.

Completeness: 4/5

Given the output schema exists, the description appropriately omits return value details. It fully covers invocation logic and backend behavior. Minor gap: could clarify that candidate_index/paper_index likely reference results from sibling 'scholarfetch_author_candidates' and 'scholarfetch_author_papers' tools.

Parameters: 4/5

Compensates effectively for 0% schema coverage by documenting all five parameters: explains the two mutually exclusive identifier patterns (DOI vs author+indices) and provides exhaustive valid values for the 'engines' parameter (comma-separated subset of seven specific sources).

Purpose: 5/5

The description opens with a specific verb ('Read') and resource ('full paper text'), clearly distinguishing it from sibling tools like 'scholarfetch_abstract' by emphasizing 'full paper' versus abstracts or metadata.

Usage Guidelines: 4/5

Explicitly defines valid invocation patterns ('Use with a DOI or with author_name + candidate_index + paper_index'), guiding the agent on required parameter combinations. Lacks explicit 'when not to use' statements, but implies scope through the parameter constraints.

scholarfetch_author_candidates (Grade A)

Disambiguate a human author name into ranked identity candidates. Use this before scholarfetch_author_papers when the name is ambiguous and you need a stable candidate_index. If you pass engines, it must include openalex.

Parameters (JSON Schema)
  name (required)
  limit (optional)
  engines (optional)

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

With zero annotations, description carries full disclosure burden. Adds valuable constraints: output is 'ranked', `candidate_index` is 'stable', and critical validation rule that `engines` must include 'openalex'. However, missing safety classification (read-only/destructive), error behavior on no matches, and available engine options beyond the OpenAlex constraint.

Conciseness: 5/5

Three sentences with zero waste: purpose (sentence 1), workflow guidance (sentence 2), parameter constraint (sentence 3). Front-loaded with clear action verb. No redundant fluff despite handling complex workflow dependencies.

Completeness: 4/5

Comprehensive workflow context and critical engine constraint provided. Output schema exists so return values need no description. Minor gap: `limit` parameter receives no documentation despite having a default value that affects behavior.

Parameters: 3/5

Schema has 0% description coverage, so description must compensate. It successfully adds crucial semantic constraint for `engines` (must include 'openalex') and clarifies `name` expects a 'human author name'. However, `limit` parameter is completely undocumented in both schema and description, leaving the agent to infer its pagination purpose.

Purpose: 5/5

Specific verb 'Disambiguate' with resource 'human author name' and output 'ranked identity candidates'. Explicitly distinguishes from sibling tool `scholarfetch_author_papers` by explaining this produces the `candidate_index` needed for that subsequent call.

Usage Guidelines: 5/5

Explicit workflow guidance: 'Use this before `scholarfetch_author_papers` when the name is ambiguous and you need a stable `candidate_index`'. Clearly defines when-to-use (ambiguous names) and the prerequisite relationship to the sibling tool.

scholarfetch_author_papers (Grade A)

Expand one author into a deduplicated paper list. This is the main author->paper traversal tool and supports research filters. Use author_id when you already know the exact author, or author_name plus candidate_index after scholarfetch_author_candidates. Supported comma-separated filters: year>=YYYY, year<=YYYY, year=YYYY, has:abstract, has:doi, has:pdf, venue:, title:, doi:. If you pass engines, it must include openalex.
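The filters argument is a small comma-separated DSL. A sketch of how a client might validate it before calling the tool (parse_filters is a hypothetical helper; the atom grammar is inferred from the list above, and a comma inside a title: or venue: value cannot be expressed in this format):

```python
import re

# Filter atoms per the tool description: year comparisons, has: flags,
# and free-text prefixes. One atom per comma-separated field.
_YEAR = re.compile(r"^year(>=|<=|=)(\d{4})$")
_HAS_FLAGS = {"abstract", "doi", "pdf"}
_PREFIXES = ("venue:", "title:", "doi:")

def parse_filters(filters: str) -> list[tuple[str, str]]:
    """Split a comma-separated filters string into (kind, value) pairs."""
    pairs = []
    for raw in filters.split(","):
        atom = raw.strip()
        if not atom:
            continue
        year = _YEAR.match(atom)
        if year:
            pairs.append(("year" + year.group(1), year.group(2)))
        elif atom.startswith("has:") and atom[4:] in _HAS_FLAGS:
            pairs.append(("has", atom[4:]))
        elif atom.startswith(_PREFIXES):
            kind, value = atom.split(":", 1)
            pairs.append((kind, value))
        else:
            raise ValueError(f"unrecognized filter atom: {atom!r}")
    return pairs
```

For example, "year>=2019,has:pdf,venue:Nature" parses into three pairs, while a malformed atom raises before any network call is made.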

Parameters (JSON Schema)
  limit (optional)
  engines (optional)
  filters (optional)
  author_id (optional)
  author_name (optional)
  candidate_index (optional)

Output Schema (JSON Schema)
  result (required)
Behavior: 4/5

Good: With no annotations, description carries the burden. It discloses 'deduplicated' output (key behavior) and traversal nature. Mentions engine constraint. Could improve by mentioning read-only/idempotent nature or pagination behavior, but covers the critical behavioral quirks.

Conciseness: 5/5

Perfect: Five sentences, each high-value. Front-loaded purpose, then scope, then workflow guidance, then syntax specification, then constraint. No wasted words despite high information density.

Completeness: 5/5

Complete: Addresses all 6 parameters despite 0% schema coverage (documents filters extensively, author lookup modes, engine constraint; limit has sensible default). Output schema exists so return values needn't be described. Covers the complex filter DSL fully.

Parameters: 5/5

Excellent compensation: With 0% schema description coverage, the description provides detailed filter syntax (year>=YYYY, has:abstract, etc.), explains the relationship between author_name/candidate_index, and clarifies the engine constraint. Far exceeds baseline for undocumented schema.

Purpose: 5/5

Excellent: 'Expand one author into a deduplicated paper list' provides specific verb (expand) + resource (papers) + key trait (deduplicated). 'Main author->paper traversal tool' distinguishes it from sibling search tools like `scholarfetch_search`.

Usage Guidelines: 5/5

Excellent: Explicitly states when to use `author_id` vs `author_name` + `candidate_index`, and mandates workflow 'after `scholarfetch_author_candidates`' (names sibling). Also notes requirement that `engines` must include 'openalex' if passed.

scholarfetch_doi_lookup (Grade A)

Enrich one known DOI with metadata, reading links, and full-text availability signals. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.
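DOIs are often pasted as full URLs; whether the server accepts the URL form is not stated, so normalizing to a bare DOI client-side is the safer path. A minimal sketch (normalize_doi is a hypothetical helper, not part of the server):

```python
def normalize_doi(raw: str) -> str:
    """Strip common URL/prefix forms so a pasted link becomes a bare DOI."""
    doi = raw.strip()
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
    return doi
```

Both "https://doi.org/10.1000/xyz123" and "doi:10.1000/xyz123" normalize to the bare identifier "10.1000/xyz123".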

Parameters (JSON Schema)
  doi (required)
  engines (optional)

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

No annotations are provided, so the description carries full behavioral disclosure burden. It adds valuable context by listing specific data sources (elsevier, openalex, etc.) and output types (full-text availability signals). However, it omits safety profile (idempotent?), error behavior for invalid DOIs, and whether results are cached or real-time.

Conciseness: 5/5

Two sentences with zero waste: first sentence front-loads purpose and outputs; second sentence provides precise parameter guidance. Every word earns its place.

Completeness: 4/5

Given the presence of an output schema (not shown but indicated), the description appropriately avoids detailing return values. For a two-parameter lookup tool, it adequately documents the non-obvious parameter (engines). Could improve by mentioning DOI format expectations or error handling for invalid identifiers.

Parameters: 4/5

With 0% schema description coverage, the description compensates significantly by documenting the `engines` parameter format (comma-separated) and valid values (explicit list of 7 sources). The `doi` parameter semantics are implied by the tool's purpose but not explicitly described, preventing a perfect score.

Purpose: 5/5

The description clearly states the tool 'enriches one known DOI with metadata, reading links, and full-text availability signals,' providing specific verb (enrich), resource (DOI), and output scope. It distinguishes from siblings like scholarfetch_search (which queries without known DOI) and scholarfetch_article_text (which extracts full text rather than metadata/links).

Usage Guidelines: 3/5

The description implies usage context through 'one known DOI,' suggesting use when a specific DOI is already identified versus searching. However, it lacks explicit when-to-use guidance or named alternatives (e.g., 'use this instead of search when you have an exact DOI').

scholarfetch_references (Grade A)

Expand a paper into its references. Use with a DOI or with author_name + candidate_index + paper_index. This is the main edge-expansion tool for traversing the literature graph. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.

Parameters (JSON Schema)
  doi (optional)
  engines (optional)
  author_name (optional)
  paper_index (optional)
  candidate_index (optional)

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

With no annotations provided, the description carries the full burden. It adds valuable context about 'traversing the literature graph' and discloses the comma-separated format for the engines parameter. However, it omits critical behavioral traits like read-only status, rate limits, idempotency, or error handling (e.g., what happens if a DOI is not found).

Conciseness: 5/5

Four sentences, zero waste: purpose declaration, usage patterns, positioning context, and parameter formatting details. Information is front-loaded with the core function ('Expand a paper'), followed by implementation specifics. Every sentence earns its place.

Completeness: 4/5

For a 5-parameter tool with zero schema coverage and an output schema present, the description adequately covers the two primary usage modes and engine configuration. The existence of an output schema excuses it from detailing return values. Minor gap in not explaining index semantics or default behaviors for optional parameters.

Parameters: 4/5

Given 0% schema description coverage, the description compensates well by explaining the relationship between author_name, candidate_index, and paper_index as a grouped pattern, and lists valid engine values. It loses one point for not explaining what the indices actually reference (e.g., which list they index into) or clarifying mutual exclusivity between the DOI and author patterns.

Purpose: 5/5

The description clearly states the tool 'Expand[s] a paper into its references' with a specific verb and resource. It distinguishes itself from siblings by positioning itself as 'the main edge-expansion tool for traversing the literature graph,' clearly differentiating it from abstract retrieval or search tools in the same suite.

Usage Guidelines: 4/5

It provides explicit parameter grouping patterns ('Use with a DOI or with author_name + candidate_index + paper_index'), guiding the agent on required input combinations. However, it lacks explicit 'when-not-to-use' guidance or named alternatives (e.g., not mentioning when to use scholarfetch_abstract instead).

scholarfetch_saved_add (Grade A)

Add one paper to a named in-memory reading list on the MCP server. Best input is paper_json copied from another ScholarFetch tool result, but DOI, query+result_index, or author_name+candidate_index+paper_index also work. Reuse the same collection name across calls to keep one research session together.
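The four input patterns are alternatives, so a client can check which one its arguments satisfy before calling the tool. A sketch in the description's order of preference (pick_input_pattern is a hypothetical helper; which pattern wins when several are present is this sketch's choice, not documented server behavior):

```python
def pick_input_pattern(args: dict) -> str:
    """Return which documented saved_add input pattern `args` satisfies,
    checked in the description's stated order of preference."""
    if args.get("paper_json") is not None:
        return "paper_json"
    if args.get("doi") is not None:
        return "doi"
    if args.get("query") is not None and args.get("result_index") is not None:
        return "query+result_index"
    author_keys = ("author_name", "candidate_index", "paper_index")
    if all(args.get(k) is not None for k in author_keys):
        return "author_name+candidate_index+paper_index"
    raise ValueError("no complete input pattern provided")
```

Note the `is not None` checks: index 0 is a valid candidate_index or result_index, so truthiness tests would wrongly reject it.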

Parameters (JSON Schema)
  doi (optional)
  query (optional)
  engines (optional)
  collection (optional, default: "default")
  paper_json (optional)
  author_name (optional)
  paper_index (optional)
  result_index (optional)
  candidate_index (optional)

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

With no annotations provided, the description carries full disclosure burden. It successfully communicates the 'in-memory' nature (volatile/temporary storage) and session-scoped persistence ('research session together'), but omits error handling (duplicate additions, missing papers), side effects, or output details.

Conciseness: 5/5

Two sentences with zero waste. The first front-loads the core action and input options; the second provides essential workflow guidance on collection naming. Every word earns its place.

Completeness: 4/5

Given the 9 undocumented parameters and existence of an output schema, the description successfully covers the primary use case, input methodologies, and session management concept. It appropriately omits return value details (covered by output schema) but could strengthen edge case coverage.

Parameters: 4/5

Despite 0% schema description coverage, the description compensates effectively by explaining input patterns: paper_json sourcing, DOI standalone usage, query+result_index pairing, and author_name+candidate_index+paper_index grouping. It also explains the collection parameter's session role. Only 'engines' remains completely undocumented.

Purpose: 5/5

The description clearly states the core action ('Add one paper'), target resource ('named in-memory reading list'), and location ('on the MCP server'). It effectively distinguishes this tool from search/lookup siblings by specifying the 'saved' reading list functionality and session-based storage.

Usage Guidelines: 4/5

It provides explicit guidance on input alternatives ('Best input is paper_json... but DOI, query+result_index... also work') and workflow patterns ('Reuse the same collection name... to keep one research session together'). It lacks explicit 'when not to use' guidance comparing it to saved_remove or saved_clear siblings.

scholarfetch_saved_clear (Grade A)

Clear all papers from a named in-memory reading list. Useful when restarting a research branch.

Parameters (JSON Schema)
  collection (optional, default: "default")

Output Schema (JSON Schema)
  result (required)
Behavior: 3/5

Adds critical behavioral context 'in-memory' (implying session-scoped storage) and 'clear all' (indicating bulk destructive operation). However, with no annotations provided, it omits reversibility warnings, side effects, or output format details despite the existence of an output schema.

Conciseness: 5/5

Two sentences with zero waste: first states the action, second provides usage context. Appropriately front-loaded and sized for a single-parameter tool.

Completeness: 3/5

Adequate for a simple destructive tool with one optional parameter, given the output schema handles return value documentation. However, gaps remain: no annotation coverage, 0% schema coverage, and missing explicit parameter documentation lower it from a 4.

Parameters: 2/5

Schema has 0% description coverage for the 'collection' parameter. Description mentions 'named' reading list, hinting at the parameter's purpose, but fails to explicitly document the parameter, its default value ('default'), or the fact that it is optional. Insufficient compensation for zero schema coverage.

Purpose: 5/5

Description uses specific verb 'Clear' with resource 'papers from a named in-memory reading list.' Explicit 'all' distinguishes from sibling scholarfetch_saved_remove (which likely removes specific items), and 'in-memory reading list' scopes it to the saved_* tool family.

Usage Guidelines: 4/5

Provides clear contextual guidance ('Useful when restarting a research branch') indicating when to use the tool. Lacks explicit 'when not to use' or named alternatives, but the contextual signal is strong enough for selection.

scholarfetch_saved_export (Grade A)

Export the current reading list as citations, abstracts, BibTeX, or an aggregated full-text corpus. Valid format values: citations, abstracts, bib, fulltext. Valid style values when format=citations: harvard, apa, ieee. Use include_references=true with format=fulltext when you want a richer downstream synthesis corpus.
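The format and style enums, and the rule that style only matters when format=citations, can be enforced client-side. A minimal sketch using the values listed above (check_export_args is a hypothetical helper; the defaults mirror the schema defaults, citations and harvard):

```python
# Valid values per the tool description.
VALID_FORMATS = {"citations", "abstracts", "bib", "fulltext"}
VALID_STYLES = {"harvard", "apa", "ieee"}

def check_export_args(fmt: str = "citations", style: str = "harvard") -> None:
    """Reject invalid format/style; style is only checked for citations."""
    if fmt not in VALID_FORMATS:
        raise ValueError(f"invalid format: {fmt!r}")
    if fmt == "citations" and style not in VALID_STYLES:
        raise ValueError(f"invalid style: {style!r}")
```

Under this reading, check_export_args("bib") passes with any style (style is ignored outside citations), while check_export_args("citations", "chicago") fails.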

Parameters (JSON Schema)
  style (optional, default: "harvard")
  format (optional, default: "citations")
  engines (optional)
  collection (optional, default: "default")
  include_references (optional)

Output Schema (JSON Schema)
  result (required)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses that fulltext format produces an 'aggregated' corpus and hints at data richness with include_references, but omits safety information (destructive potential, auth requirements, rate limits) that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: purpose statement, format values, style values, and advanced usage tip. Information is front-loaded and structured logically from basic to conditional usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters and an output schema (which excuses return value documentation), the description is incomplete due to unexplained 'engines' and 'collection' parameters. The valid value documentation for the other three parameters is thorough, but gaps remain for a tool with this parameter count.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. The description compensates by documenting valid enum-like values for 'format' (4 options) and 'style' (3 options), and explains the conditional behavior of 'include_references'. However, it completely omits semantics for 'engines' and 'collection' parameters, leaving 40% of the interface unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Export') and clear resource ('the current reading list'), explicitly distinguishing this from sibling tools like scholarfetch_saved_add or scholarfetch_saved_list by focusing on export functionality rather than CRUD operations on the list itself.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit valid values for format and style parameters, and includes conditional guidance ('when `format=citations`', 'when you want a richer downstream synthesis corpus'). Lacks explicit 'when not to use' or named alternatives, but the value constraints effectively guide proper invocation.

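The documented value constraints can be sketched as a small client-side check before invoking the tool. This is a hypothetical helper, not part of the server; the enum values and defaults come from the listing above, and the assumption that style only matters when format=citations follows the description's wording:

```python
# Valid values documented for scholarfetch_saved_export.
VALID_FORMATS = {"citations", "abstracts", "bib", "fulltext"}
VALID_STYLES = {"harvard", "apa", "ieee"}

def build_export_args(format="citations", style="harvard",
                      collection="default", include_references=False):
    """Assemble and sanity-check arguments before calling the tool."""
    if format not in VALID_FORMATS:
        raise ValueError(f"format must be one of {sorted(VALID_FORMATS)}")
    # The listing only defines style values for format=citations.
    if format == "citations" and style not in VALID_STYLES:
        raise ValueError(f"style must be one of {sorted(VALID_STYLES)}")
    return {"format": format, "style": style, "collection": collection,
            "include_references": include_references}

# Per the description's guidance: a richer downstream synthesis corpus.
corpus_args = build_export_args(format="fulltext", include_references=True)
```

A check like this would surface the unexplained-parameter gap early: nothing in the listing says what `engines` or `collection` accept, so a defensive client can only pass them through unvalidated.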
scholarfetch_saved_list (A)

List all papers currently saved in a named in-memory reading list. Use this to inspect the working set before exporting or removing items.

Parameters (JSON Schema)

Name         Required   Description   Default
collection   No         -             default

Output Schema (JSON Schema)

Name     Required   Description
result   Yes        -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds critical behavioral context that the reading list is 'in-memory' (implying non-persistent storage) and 'named' (indicating support for multiple collections). Does not mention output format, but output schema exists to cover this gap.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence front-loads the core action (List), second sentence provides workflow context. Every word earns its place; no redundancy or generic filler.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a simple single-parameter tool with an output schema. Covers purpose, usage timing, and key behavioral trait (in-memory storage). Does not explicitly mention the default collection value, but this is discoverable in the schema. Output schema handles return value documentation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage (no parameter descriptions), so description must compensate. Mentions 'named' reading list, loosely implying the collection parameter specifies the list name, but does not explicitly document parameter semantics or the default value of 'default'. Provides partial compensation but not full documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('List') and resource ('papers currently saved in a named in-memory reading list'). Explicitly distinguishes from sibling modification tools like scholarfetch_saved_add, scholarfetch_saved_remove, and scholarfetch_saved_export by specifying this is for inspection/reading only.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use this to inspect the working set before exporting or removing items.' This provides clear workflow sequencing guidance and references related sibling operations (exporting and removing) without naming them explicitly.

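The inspect-before-modify workflow the description recommends might look like the sketch below. Both the helper and the stub are hypothetical; `call_tool` stands in for whatever MCP client invocation the agent actually runs, and only the tool name and the `collection` parameter come from the listing above:

```python
def list_saved(call_tool, collection="default"):
    """Inspect the working set of a named in-memory reading list
    before exporting from it or removing items."""
    return call_tool("scholarfetch_saved_list", {"collection": collection})

# Stubbed client for illustration; a real agent would pass its MCP client here.
def fake_call_tool(name, args):
    return {"tool": name, "args": args}

list_saved(fake_call_tool)                          # the "default" list
response = list_saved(fake_call_tool, "review-queue")  # a named collection
```

Because the list is in-memory, an agent that restarts its session should expect the working set to be empty and re-inspect before any export or removal.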
scholarfetch_saved_remove (A)

Remove one paper from a named in-memory reading list by DOI or exact title.

Parameters (JSON Schema)

Name         Required   Description   Default
doi          No         -             -
title        No         -             -
collection   No         -             default

Output Schema (JSON Schema)

Name     Required   Description
result   Yes        -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It successfully discloses 'in-memory' persistence (critical behavioral trait) and 'named' collections, but omits error handling (what if DOI/title not found?), idempotency, or return value semantics. Output schema exists, reducing burden for return values.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with zero waste. Front-loaded with action ('Remove'), followed by scope ('one paper'), resource ('named in-memory reading list'), and parameter hint ('by DOI or exact title'). Every clause earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description appropriately focuses on operation semantics rather than return values. Adequately covers the 3 parameters despite zero schema coverage. Minor gap: doesn't mention error conditions or the fact that 'exact title' requires precise matching.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates effectively: 'named...reading list' maps to 'collection', 'by DOI or exact title' explains the two identifier parameters. However, it doesn't clarify that these are mutually exclusive identifiers or that 'exact title' implies case-sensitive/precise matching.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description provides specific verb (remove), clear resource (paper from reading list), and scope (one paper). It effectively distinguishes from siblings: contrasts with 'saved_add' (addition), 'saved_clear' (bulk deletion), and 'saved_list' (reading), making the selection criteria unambiguous.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies context through 'named in-memory reading list' (suggests prior saving is required), but lacks explicit when-to-use guidance or comparison to alternatives like 'saved_clear' for bulk removal. Agent must infer this is for targeted single-item removal versus bulk operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
