ScholarFetch
Server Details
Multi-engine scholarly research server for search, traversal, full text, and reading lists.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: laibniz/scholarfetch
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
12 tools
scholarfetch_abstract
Read the best abstract available for a paper. Use with a DOI or with author_name + candidate_index + paper_index after author_papers. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.
| Name | Required | Description | Default |
|---|---|---|---|
| doi | No | | |
| engines | No | | |
| author_name | No | | |
| paper_index | No | | |
| candidate_index | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
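The two documented input patterns can be sketched as client-side argument building, assuming the engine list in the description is exhaustive (the helper name `abstract_args` is hypothetical, not part of the server):

```python
# Sketch of argument construction for scholarfetch_abstract, mirroring
# the two input patterns stated in the description above.
VALID_ENGINES = {"elsevier", "openalex", "crossref", "arxiv",
                 "europepmc", "springer", "semanticscholar"}

def abstract_args(doi=None, author_name=None, candidate_index=None,
                  paper_index=None, engines=None):
    """Build a call-arguments dict for one of the two documented patterns."""
    if engines is not None:
        chosen = {e.strip() for e in engines.split(",")}
        unknown = chosen - VALID_ENGINES
        if unknown:
            raise ValueError(f"unknown engines: {sorted(unknown)}")
    if doi is not None:
        args = {"doi": doi}
    elif author_name is not None:
        # Pattern 2 needs both indices, obtained from a prior
        # author_papers call per the description.
        if candidate_index is None or paper_index is None:
            raise ValueError("author_name needs candidate_index and paper_index")
        args = {"author_name": author_name,
                "candidate_index": candidate_index,
                "paper_index": paper_index}
    else:
        raise ValueError("pass a doi or the author_name pattern")
    if engines:
        args["engines"] = engines
    return args
```

This only validates what the description states; the server may accept other combinations.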
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Lists specific engines used (elsevier, openalex, etc.) which adds context, but fails to explain what 'best' means (selection algorithm), failure modes, or return behavior when abstracts are unavailable. No safety/scope disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: the first states purpose, the second gives the usage pattern, and the third covers parameter syntax. Every clause earns its place. Appropriate density for the complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong coverage given output schema exists (exempting return value documentation). Workflow dependencies and parameter semantics are complete. Minor gap: no mention of not-found behavior or rate limiting considerations for the multi-engine search.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Critical compensation for 0% schema coverage: explains doi via usage pattern, documents author_name/candidate_index/paper_index through workflow example, and explicitly details engines parameter format ('comma-separated subset') plus complete enum values. All 5 parameters are semantically covered.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Read') + resource ('abstract') + scope ('best available'). Effectively distinguishes from sibling 'scholarfetch_article_text' (full text vs abstract) and 'scholarfetch_author_papers' (list vs fetch).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: states two input patterns (DOI vs author_name + indices) and critical workflow prerequisite ('after author_papers'). Clearly defines dependency chain with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholarfetch_article_text
Read full paper text when machine-readable content is recoverable. Use with a DOI or with author_name + candidate_index + paper_index. Uses Elsevier first, then open-access fallbacks such as Springer OA, Europe PMC, arXiv PDF, and generic PDF URLs when text is recoverable. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.
| Name | Required | Description | Default |
|---|---|---|---|
| doi | No | | |
| engines | No | | |
| author_name | No | | |
| paper_index | No | | |
| candidate_index | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
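The Elsevier-first fallback chain described above can be sketched as an ordered iteration; the source names and fetcher callables here are hypothetical stand-ins, not the server's internals:

```python
# Sketch of the documented fallback order: Elsevier first, then the
# open-access sources, stopping at the first recoverable text.
FALLBACK_ORDER = ["elsevier", "springer_oa", "europepmc", "arxiv_pdf", "generic_pdf"]

def fetch_full_text(doi, fetchers):
    """Try each source in order; return (source, text) for the first hit."""
    for source in FALLBACK_ORDER:
        fetch = fetchers.get(source)
        if fetch is None:
            continue  # source unavailable in this session
        text = fetch(doi)
        if text:  # machine-readable content recovered
            return source, text
    return None, None
```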
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and successfully discloses the multi-source fallback chain (Elsevier → Springer OA → Europe PMC → arXiv) and the conditional limitation ('when machine-readable content is recoverable').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four information-dense sentences with zero waste: purpose upfront, then usage patterns, then implementation details (fallbacks), then parameter syntax. Every clause delivers actionable guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description appropriately omits return value details. It fully covers invocation logic and backend behavior. Minor gap: could clarify that candidate_index/paper_index likely reference results from sibling 'scholarfetch_author_candidates' and 'scholarfetch_author_papers' tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Compensates effectively for 0% schema coverage by documenting all five parameters: explains the two mutually exclusive identifier patterns (DOI vs author+indices) and provides exhaustive valid values for the 'engines' parameter (comma-separated subset of seven specific sources).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Read') and resource ('full paper text'), clearly distinguishing it from sibling tools like 'scholarfetch_abstract' by emphasizing 'full paper' versus abstracts or metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly defines valid invocation patterns ('Use with a DOI or with author_name + candidate_index + paper_index'), guiding the agent on required parameter combinations. Lacks explicit 'when not to use' statements, but implies scope through the parameter constraints.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholarfetch_author_candidates
Disambiguate a human author name into ranked identity candidates. Use this before scholarfetch_author_papers when the name is ambiguous and you need a stable candidate_index. If you pass engines, it must include openalex.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| limit | No | | |
| engines | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
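A minimal sketch of the disambiguation step, assuming the ranked candidates come back as a list of records (the field names and the `pick_candidate` helper are hypothetical; the actual result shape is defined by the undocumented output schema):

```python
# Hypothetical selection of a stable candidate_index from the ranked
# candidates, for use in a follow-up scholarfetch_author_papers call.
def pick_candidate(candidates, affiliation_hint=None):
    """Return the index of the best match; fall back to the top-ranked entry."""
    if affiliation_hint:
        for i, cand in enumerate(candidates):
            if affiliation_hint.lower() in cand.get("affiliation", "").lower():
                return i
    return 0  # candidates are ranked, so index 0 is the default choice
```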
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With zero annotations, description carries full disclosure burden. Adds valuable constraints: output is 'ranked', `candidate_index` is 'stable', and critical validation rule that `engines` must include 'openalex'. However, missing safety classification (read-only/destructive), error behavior on no matches, and available engine options beyond the OpenAlex constraint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose (sentence 1), workflow guidance (sentence 2), parameter constraint (sentence 3). Front-loaded with clear action verb. No redundant fluff despite handling complex workflow dependencies.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive workflow context and critical engine constraint provided. Output schema exists so return values need no description. Minor gap: `limit` parameter receives no documentation despite having a default value that affects behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, so description must compensate. It successfully adds crucial semantic constraint for `engines` (must include 'openalex') and clarifies `name` expects a 'human author name'. However, `limit` parameter is completely undocumented in both schema and description, leaving the agent to infer its pagination purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Disambiguate' with resource 'human author name' and output 'ranked identity candidates'. Explicitly distinguishes from sibling tool `scholarfetch_author_papers` by explaining this produces the `candidate_index` needed for that subsequent call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit workflow guidance: 'Use this before `scholarfetch_author_papers` when the name is ambiguous and you need a stable `candidate_index`'. Clearly defines when-to-use (ambiguous names) and the prerequisite relationship to the sibling tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholarfetch_author_papers
Expand one author into a deduplicated paper list. This is the main author->paper traversal tool and supports research filters. Use author_id when you already know the exact author, or author_name plus candidate_index after scholarfetch_author_candidates. Supported comma-separated filters: year>=YYYY, year<=YYYY, year=YYYY, has:abstract, has:doi, has:pdf, venue:, title:, doi:. If you pass engines, it must include openalex.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| engines | No | | |
| filters | No | | |
| author_id | No | | |
| author_name | No | | |
| candidate_index | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
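The comma-separated filter syntax listed in the description can be sketched as a small parser; the grammar below is inferred from the documented examples (`year>=YYYY`, `has:abstract`, `venue:`, ...), and the real server may accept more:

```python
import re

# Sketch of a parser for the filter DSL documented above. Purely
# illustrative: it normalizes the filter string into tuples.
def parse_filters(filters):
    parsed = []
    for raw in filters.split(","):
        token = raw.strip()
        m = re.fullmatch(r"year(>=|<=|=)(\d{4})", token)
        if m:
            parsed.append(("year", m.group(1), int(m.group(2))))
        elif token.startswith("has:"):
            parsed.append(("has", token[4:]))  # has:abstract, has:doi, has:pdf
        elif ":" in token:
            field, value = token.split(":", 1)
            parsed.append((field, value))      # venue:, title:, doi:
        else:
            raise ValueError(f"unrecognized filter: {token!r}")
    return parsed
```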
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Good: With no annotations, description carries the burden. It discloses 'deduplicated' output (key behavior) and traversal nature. Mentions engine constraint. Could improve by mentioning read-only/idempotent nature or pagination behavior, but covers the critical behavioral quirks.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfect: Five sentences, each high-value. Front-loaded purpose, then scope, then workflow guidance, then syntax specification, then constraint. No wasted words despite high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete: Addresses all 6 parameters despite 0% schema coverage (documents filters extensively, author lookup modes, engine constraint; limit has sensible default). Output schema exists so return values needn't be described. Covers the complex filter DSL fully.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation: With 0% schema description coverage, the description provides detailed filter syntax (year>=YYYY, has:abstract, etc.), explains the relationship between author_name/candidate_index, and clarifies the engine constraint. Far exceeds baseline for undocumented schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent: 'Expand one author into a deduplicated paper list' provides specific verb (expand) + resource (papers) + key trait (deduplicated). 'Main author->paper traversal tool' distinguishes it from sibling search tools like `scholarfetch_search`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent: Explicitly states when to use `author_id` vs `author_name` + `candidate_index`, and mandates workflow 'after `scholarfetch_author_candidates`' (names sibling). Also notes requirement that `engines` must include 'openalex' if passed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholarfetch_doi_lookup
Enrich one known DOI with metadata, reading links, and full-text availability signals. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.
| Name | Required | Description | Default |
|---|---|---|---|
| doi | Yes | | |
| engines | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full behavioral disclosure burden. It adds valuable context by listing specific data sources (elsevier, openalex, etc.) and output types (full-text availability signals). However, it omits safety profile (idempotent?), error behavior for invalid DOIs, and whether results are cached or real-time.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence front-loads purpose and outputs; second sentence provides precise parameter guidance. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown but indicated), the description appropriately avoids detailing return values. For a two-parameter lookup tool, it adequately documents the non-obvious parameter (engines). Could improve by mentioning DOI format expectations or error handling for invalid identifiers.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates significantly by documenting the `engines` parameter format (comma-separated) and valid values (explicit list of 7 sources). The `doi` parameter semantics are implied by the tool's purpose but not explicitly described, preventing a perfect score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'enriches one known DOI with metadata, reading links, and full-text availability signals,' providing specific verb (enrich), resource (DOI), and output scope. It distinguishes from siblings like scholarfetch_search (which queries without known DOI) and scholarfetch_article_text (which extracts full text rather than metadata/links).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'one known DOI,' suggesting use when a specific DOI is already identified versus searching. However, it lacks explicit when-to-use guidance or named alternatives (e.g., 'use this instead of search when you have an exact DOI').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholarfetch_references
Expand a paper into its references. Use with a DOI or with author_name + candidate_index + paper_index. This is the main edge-expansion tool for traversing the literature graph. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.
| Name | Required | Description | Default |
|---|---|---|---|
| doi | No | | |
| engines | No | | |
| author_name | No | | |
| paper_index | No | | |
| candidate_index | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
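Using references as the edge expansion, the literature-graph traversal the description alludes to can be sketched as a deduplicated breadth-first walk; `get_references` is a hypothetical stand-in for calling scholarfetch_references with a DOI:

```python
from collections import deque

# Sketch of citation-graph traversal via reference expansion,
# deduplicated by DOI and bounded by depth.
def traverse(seed_doi, get_references, max_depth=2):
    """Breadth-first walk of the literature graph from one seed DOI."""
    seen = {seed_doi}
    queue = deque([(seed_doi, 0)])
    order = []
    while queue:
        doi, depth = queue.popleft()
        order.append(doi)
        if depth >= max_depth:
            continue  # don't expand edges beyond the depth bound
        for ref in get_references(doi):
            if ref not in seen:
                seen.add(ref)
                queue.append((ref, depth + 1))
    return order
```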
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable context about 'traversing the literature graph' and discloses the comma-separated format for the engines parameter. However, it omits critical behavioral traits like read-only status, rate limits, idempotency, or error handling (e.g., what happens if a DOI is not found).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, zero waste: purpose declaration, usage patterns, positioning context, and parameter formatting details. Information is front-loaded with the core function ('Expand a paper'), followed by implementation specifics. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter tool with zero schema coverage and an output schema present, the description adequately covers the two primary usage modes and engine configuration. The existence of an output schema excuses it from detailing return values. Minor gap in not explaining index semantics or default behaviors for optional parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description compensates well by explaining the relationship between author_name, candidate_index, and paper_index as a grouped pattern, and lists valid engine values. It loses one point for not explaining what the indices actually reference (e.g., which list they index into) or clarifying mutual exclusivity between the DOI and author patterns.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Expand[s] a paper into its references' with a specific verb and resource. It distinguishes itself from siblings by positioning itself as 'the main edge-expansion tool for traversing the literature graph,' clearly differentiating it from abstract retrieval or search tools in the same suite.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit parameter grouping patterns ('Use with a DOI or with author_name + candidate_index + paper_index'), guiding the agent on required input combinations. However, it lacks explicit 'when-not-to-use' guidance or named alternatives (e.g., not mentioning when to use scholarfetch_abstract instead).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholarfetch_saved_add
Add one paper to a named in-memory reading list on the MCP server. Best input is paper_json copied from another ScholarFetch tool result, but DOI, query+result_index, or author_name+candidate_index+paper_index also work. Reuse the same collection name across calls to keep one research session together.
| Name | Required | Description | Default |
|---|---|---|---|
| doi | No | | |
| query | No | | |
| engines | No | | |
| collection | No | | default |
| paper_json | No | | |
| author_name | No | | |
| paper_index | No | | |
| result_index | No | | |
| candidate_index | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
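The input precedence the description implies (paper_json preferred, then DOI, then the two index-based patterns) can be sketched client-side; the `saved_add_args` helper is hypothetical, with names mirroring the parameter table above:

```python
# Sketch of the documented input precedence for scholarfetch_saved_add.
def saved_add_args(collection="default", paper_json=None, doi=None,
                   query=None, result_index=None,
                   author_name=None, candidate_index=None, paper_index=None):
    args = {"collection": collection}
    if paper_json is not None:
        args["paper_json"] = paper_json  # best: exact record from a prior result
    elif doi is not None:
        args["doi"] = doi
    elif query is not None and result_index is not None:
        args.update(query=query, result_index=result_index)
    elif None not in (author_name, candidate_index, paper_index):
        args.update(author_name=author_name,
                    candidate_index=candidate_index,
                    paper_index=paper_index)
    else:
        raise ValueError("no valid paper identifier provided")
    return args
```

Reusing the same `collection` value across calls, as the description advises, keeps one research session in one list.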
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It successfully communicates the 'in-memory' nature (volatile/temporary storage) and session-scoped persistence ('research session together'), but omits error handling (duplicate additions, missing papers), side effects, or output details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste. The first front-loads the core action, the second ranks the input options, and the third provides essential workflow guidance on collection naming. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 9 undocumented parameters and existence of an output schema, the description successfully covers the primary use case, input methodologies, and session management concept. It appropriately omits return value details (covered by output schema) but could strengthen edge case coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description compensates effectively by explaining input patterns: paper_json sourcing, DOI standalone usage, query+result_index pairing, and author_name+candidate_index+paper_index grouping. It also explains the collection parameter's session role. Only 'engines' remains completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core action ('Add one paper'), target resource ('named in-memory reading list'), and location ('on the MCP server'). It effectively distinguishes this tool from search/lookup siblings by specifying the 'saved' reading list functionality and session-based storage.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on input alternatives ('Best input is paper_json... but DOI, query+result_index... also work') and workflow patterns ('Reuse the same collection name... to keep one research session together'). It lacks explicit 'when not to use' guidance comparing it to saved_remove or saved_clear siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholarfetch_saved_clear
Clear all papers from a named in-memory reading list. Useful when restarting a research branch.
| Name | Required | Description | Default |
|---|---|---|---|
| collection | No | | default |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral context 'in-memory' (implying session-scoped storage) and 'clear all' (indicating bulk destructive operation). However, with no annotations provided, it omits reversibility warnings, side effects, or output format details despite the existence of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states the action, second provides usage context. Appropriately front-loaded and sized for a single-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple destructive tool with one optional parameter, given the output schema handles return value documentation. However, gaps remain: no annotation coverage, 0% schema coverage, and missing explicit parameter documentation lower it from a 4.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage for the 'collection' parameter. Description mentions 'named' reading list, hinting at the parameter's purpose, but fails to explicitly document the parameter, its default value ('default'), or the fact that it is optional. Insufficient compensation for zero schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Clear' with resource 'papers from a named in-memory reading list.' Explicit 'all' distinguishes from sibling scholarfetch_saved_remove (which likely removes specific items), and 'in-memory reading list' scopes it to the saved_* tool family.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual guidance ('Useful when restarting a research branch') indicating when to use the tool. Lacks explicit 'when not to use' or named alternatives, but the contextual signal is strong enough for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scholarfetch_saved_export
Export the current reading list as citations, abstracts, BibTeX, or an aggregated full-text corpus. Valid format values: citations, abstracts, bib, fulltext. Valid style values when format=citations: harvard, apa, ieee. Use include_references=true with format=fulltext when you want a richer downstream synthesis corpus.
| Name | Required | Description | Default |
|---|---|---|---|
| style | No | | harvard |
| format | No | | citations |
| engines | No | | |
| collection | No | | default |
| include_references | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
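The documented format/style values and their conditional interaction (style applies only when format=citations) can be sketched as client-side validation; the `export_args` helper is hypothetical:

```python
# Sketch of validation for the documented scholarfetch_saved_export
# values: four formats, three citation styles, and the fulltext +
# include_references combination for a richer synthesis corpus.
FORMATS = {"citations", "abstracts", "bib", "fulltext"}
STYLES = {"harvard", "apa", "ieee"}

def export_args(format="citations", style="harvard",
                collection="default", include_references=False):
    if format not in FORMATS:
        raise ValueError(f"format must be one of {sorted(FORMATS)}")
    args = {"format": format, "collection": collection}
    if format == "citations":
        if style not in STYLES:
            raise ValueError(f"style must be one of {sorted(STYLES)}")
        args["style"] = style  # style only applies to citations
    if format == "fulltext" and include_references:
        args["include_references"] = True
    return args
```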
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that fulltext format produces an 'aggregated' corpus and hints at data richness with include_references, but omits safety information (destructive potential, auth requirements, rate limits) that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with zero waste: purpose statement, format values, style values, and advanced usage tip. Information is front-loaded and structured logically from basic to conditional usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters and an output schema (which excuses return value documentation), the description is incomplete due to unexplained 'engines' and 'collection' parameters. The valid value documentation for the other three parameters is thorough, but gaps remain for a tool with this parameter count.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. The description compensates by documenting valid enum-like values for 'format' (4 options) and 'style' (3 options), and explains the conditional behavior of 'include_references'. However, it completely omits semantics for 'engines' and 'collection' parameters, leaving 40% of the interface unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Export') and clear resource ('the current reading list'), explicitly distinguishing this from sibling tools like scholarfetch_saved_add or scholarfetch_saved_list by focusing on export functionality rather than CRUD operations on the list itself.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit valid values for format and style parameters, and includes conditional guidance ('when `format=citations`', 'when you want a richer downstream synthesis corpus'). Lacks explicit 'when not to use' or named alternatives, but the value constraints effectively guide proper invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
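The export tool's documented value constraints can be captured in a small client-side argument builder. The sketch below is illustrative only: the tool and parameter names come from the definition above, while the validation logic is an assumption about sensible client behavior, not server code.

```python
# Sketch: building an arguments payload for scholarfetch_saved_export.
# Valid values are taken from the tool description; the validation
# here is client-side convenience, not documented server behavior.

VALID_FORMATS = {"citations", "abstracts", "bib", "fulltext"}
VALID_STYLES = {"harvard", "apa", "ieee"}

def export_args(fmt="citations", style="harvard", collection="default",
                include_references=False):
    if fmt not in VALID_FORMATS:
        raise ValueError(f"format must be one of {sorted(VALID_FORMATS)}")
    # style only applies when format=citations, per the description
    if fmt == "citations" and style not in VALID_STYLES:
        raise ValueError(f"style must be one of {sorted(VALID_STYLES)}")
    args = {"format": fmt, "collection": collection}
    if fmt == "citations":
        args["style"] = style
    # include_references is documented as useful with format=fulltext
    if fmt == "fulltext" and include_references:
        args["include_references"] = True
    return args
```

For example, `export_args(fmt="fulltext", include_references=True)` yields the richer-corpus invocation the description recommends, while the defaults reproduce a Harvard-style citation export.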
scholarfetch_saved_list
List all papers currently saved in a named in-memory reading list. Use this to inspect the working set before exporting or removing items.
| Name | Required | Description | Default |
|---|---|---|---|
| collection | No | default |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds critical behavioral context: the reading list is 'in-memory' (implying non-persistent storage) and 'named' (indicating support for multiple collections). It does not mention output format, but the output schema covers that gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence front-loads the core action (List), second sentence provides workflow context. Every word earns its place; no redundancy or generic filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for a simple single-parameter tool with an output schema. Covers purpose, usage timing, and key behavioral trait (in-memory storage). Does not explicitly mention the default collection value, but this is discoverable in the schema. Output schema handles return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (no parameter descriptions), so description must compensate. Mentions 'named' reading list, loosely implying the collection parameter specifies the list name, but does not explicitly document parameter semantics or the default value of 'default'. Provides partial compensation but not full documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('List') and resource ('papers currently saved in a named in-memory reading list'). Explicitly distinguishes from sibling modification tools like scholarfetch_saved_add, scholarfetch_saved_remove, and scholarfetch_saved_export by specifying this is for inspection/reading only.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'Use this to inspect the working set before exporting or removing items.' This provides clear workflow sequencing guidance and references related sibling operations (exporting and removing) without naming them explicitly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
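Under the Streamable HTTP transport, a call to this tool travels as an MCP `tools/call` JSON-RPC request. A minimal sketch of that envelope in Python follows; the envelope shape is the standard MCP convention, the `id` value is arbitrary, and the collection name matches the documented default.

```python
import json

# Sketch of an MCP `tools/call` request for scholarfetch_saved_list.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "scholarfetch_saved_list",
        "arguments": {"collection": "default"},
    },
}

# Serialized form, as it would appear in the HTTP request body.
payload = json.dumps(request)
```

The same envelope, with a different `name` and `arguments`, applies to every tool on this page.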
scholarfetch_saved_remove
Remove one paper from a named in-memory reading list by DOI or exact title.
| Name | Required | Description | Default |
|---|---|---|---|
| doi | No | ||
| title | No | ||
| collection | No | default |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses 'in-memory' persistence (a critical behavioral trait) and 'named' collections, but omits error handling (what if the DOI or title is not found?), idempotency, and return-value semantics. The output schema exists, reducing the burden for return values.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with zero waste. Front-loaded with action ('Remove'), followed by scope ('one paper'), resource ('named in-memory reading list'), and parameter hint ('by DOI or exact title'). Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the output schema exists, the description appropriately focuses on operation semantics rather than return values. It adequately covers the 3 parameters despite zero schema coverage. Minor gap: it doesn't mention error conditions or that 'exact title' requires precise matching.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates effectively: 'named...reading list' maps to 'collection', 'by DOI or exact title' explains the two identifier parameters. However, it doesn't clarify that these are mutually exclusive identifiers or that 'exact title' implies case-sensitive/precise matching.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb (remove), clear resource (paper from reading list), and scope (one paper). It effectively distinguishes from siblings: contrasts with 'saved_add' (addition), 'saved_clear' (bulk deletion), and 'saved_list' (reading), making the selection criteria unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies context through 'named in-memory reading list' (suggests prior saving is required), but lacks explicit when-to-use guidance or comparison to alternatives like 'saved_clear' for bulk removal. Agent must infer this is for targeted single-item removal versus bulk operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
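Because the tool accepts either a DOI or an exact title, a client might normalize its arguments before calling. This is a sketch under stated assumptions: preferring DOI over title (since title matching is exact) is a best-practice guess, not documented server behavior, and the mutual-exclusion rule is likewise assumed.

```python
def remove_args(doi=None, title=None, collection="default"):
    """Build arguments for scholarfetch_saved_remove.

    Exactly one identifier is sent; DOI wins when both are given,
    because title matching is exact (an assumption, not documented
    server behavior).
    """
    if doi is None and title is None:
        raise ValueError("provide a doi or an exact title")
    args = {"collection": collection}
    if doi is not None:
        args["doi"] = doi
    else:
        args["title"] = title
    return args
```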
scholarfetch_search
Start a research traversal from keywords, a DOI, or a person name. Returns deduplicated paper records that you can inspect, save, expand through references, or use as seeds for author exploration. If you pass engines, use a comma-separated subset of: elsevier, openalex, crossref, arxiv, europepmc, springer, semanticscholar.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | ||
| query | Yes | ||
| engines | No |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses deduplication behavior and the multi-engine aggregation concept, but lacks other critical behavioral details like rate limits, pagination, caching policies, or query syntax rules (e.g., boolean operators).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three densely packed sentences. The first two cover purpose and workflow capabilities; the third covers parameter syntax for engines. No repetition of schema titles or unnecessary filler. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with zero schema descriptions and no annotations, the description adequately compensates for the schema deficiency while leveraging the existence of an output schema (so return values needn't be detailed). Could be improved by noting any query syntax limitations or result set constraints, but sufficient for tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides essential compensation: specifies query accepts 'keywords, a DOI, or a person name' and details the engines parameter format as 'comma-separated subset' with explicit valid values. Only the limit parameter is undocumented, which is acceptable given its self-explanatory nature and default value in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Start[s] a research traversal' with specific input types (keywords, DOI, person name) and distinguishes itself from siblings by emphasizing it's the entry point for workflows involving author exploration and reference expansion, contrasting with sibling tools like scholarfetch_author_papers or scholarfetch_references that handle specific follow-up operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear workflow context by stating returned records can be used to 'expand through references' or as 'seeds for author exploration,' implying this is the discovery/starting tool versus specialized siblings. However, it doesn't explicitly state when NOT to use it or name specific alternative tools for direct lookups.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
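The 'comma-separated subset' requirement for engines is easy to get wrong, so a client can serialize and validate it before the call. A sketch, with the engine list taken verbatim from the tool description and the validation logic assumed on the client side:

```python
# Engines listed in the scholarfetch_search description.
ENGINES = {"elsevier", "openalex", "crossref", "arxiv",
           "europepmc", "springer", "semanticscholar"}

def search_args(query, engines=None, limit=None):
    """Build arguments for scholarfetch_search.

    `engines` may be any iterable of engine names; it is serialized
    to the comma-separated string the description requires.
    """
    args = {"query": query}
    if engines:
        chosen = set(engines)
        unknown = chosen - ENGINES
        if unknown:
            raise ValueError(f"unknown engines: {sorted(unknown)}")
        args["engines"] = ",".join(sorted(chosen))
    if limit is not None:
        args["limit"] = limit
    return args
```

For example, `search_args("perovskite solar cells", engines=["arxiv", "openalex"], limit=5)` restricts the traversal to two engines while keeping the required `query` intact.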
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
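Before publishing, it can help to check that the manifest matches the documented shape. A sketch in Python; the two checks (schema URL and at least one maintainer email) mirror the structure shown above, and anything beyond that is an assumption about what Glama verifies.

```python
import json

def looks_like_glama_manifest(text):
    """Loose structural check for a /.well-known/glama.json document."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return (doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
            and any("email" in m for m in maintainers))

sample = '''{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}'''
```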
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.