pubmed-mcp-server
Server Details
Search PubMed, fetch articles and full text, generate citations, and explore MeSH terms via NCBI.
- Status: Healthy
- Transport: Streamable HTTP
- Repository: cyanheads/pubmed-mcp-server
- GitHub Stars: 78
- Server Listing: pubmed-mcp-server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
9 tools
pubmed_convert_ids (Pubmed Convert Ids) · A · Read-only
Convert between article identifiers (DOI, PMID, PMCID). Accepts up to 50 IDs of a single type per request. Uses the NCBI PMC ID Converter API — only resolves articles indexed in PubMed Central. For articles not in PMC, use pubmed_search_articles instead.
| Name | Required | Description | Default |
|---|---|---|---|
| ids | Yes | Article identifiers to convert. All IDs must be the same type. DOIs: "10.1093/nar/gks1195", PMIDs: "23193287", PMCIDs: "PMC3531190". | |
| idtype | Yes | The type of IDs being submitted. Required so the API can unambiguously resolve them. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| records | Yes | Conversion results, one per input ID |
| totalConverted | Yes | Number of IDs successfully converted |
| totalSubmitted | Yes | Number of IDs submitted |
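For orientation, the request this tool wraps can be sketched as a URL builder for NCBI's PMC ID Converter endpoint. This is an illustrative sketch, not the server's actual code; the helper name and the `format=json` choice are assumptions:

```python
from urllib.parse import urlencode

IDCONV = "https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/"

def idconv_url(ids, idtype):
    # One request handles a batch of comma-separated IDs, all of one type
    # (idtype is one of: "pmid", "pmcid", "mid", "doi").
    params = {"ids": ",".join(ids), "idtype": idtype, "format": "json"}
    return IDCONV + "?" + urlencode(params)

# Convert a DOI to its PMID/PMCID:
url = idconv_url(["10.1093/nar/gks1195"], "doi")
```

The JSON response lists one record per input ID; IDs not indexed in PMC come back with an error status, which matches the coverage limitation noted above.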
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds crucial context beyond annotations: identifies the specific external API ('NCBI PMC ID Converter API') and discloses data coverage limitations ('only resolves articles indexed in PubMed Central'). Annotations indicate read-only and open-world, but description explains what external system is contacted and its scope restrictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose first, constraints second, behavioral details and alternative third. Every clause earns its place. Appropriate length for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage, presence of annotations (readOnlyHint, openWorldHint), and existence of output schema, the description provides complete contextual framing. It covers purpose, limitations, external dependencies, and fallback tools without needing to describe return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed examples (DOI formats, PMID/PMCID syntax). Description reinforces the 50-ID limit and single-type requirement but does not add syntactic or semantic details beyond what the schema already provides. Baseline 3 is appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Convert') + resource ('article identifiers') + scope ('DOI, PMID, PMCID'). Distinguishes from siblings by clarifying it uses the 'NCBI PMC ID Converter API' rather than general PubMed search or fetch operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the limitation ('only resolves articles indexed in PubMed Central') and names the exact alternative tool ('use pubmed_search_articles instead'). Also clarifies batch constraint ('Accepts up to 50 IDs of a single type').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pubmed_fetch_articles (Pubmed Fetch Articles) · A · Read-only
Fetch full article metadata by PubMed IDs. Returns detailed article information including abstract, authors, journal, MeSH terms.
| Name | Required | Description | Default |
|---|---|---|---|
| pmids | Yes | PubMed IDs to fetch | |
| includeMesh | No | Include MeSH terms | |
| includeGrants | No | Include grant information | |
Output Schema
| Name | Required | Description |
|---|---|---|
| articles | Yes | Parsed articles |
| totalReturned | Yes | Number of articles returned |
| unavailablePmids | No | PMIDs that returned no article data |
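Under the hood this kind of lookup maps to NCBI's EFetch utility. A minimal sketch of the request, assuming the standard E-utilities endpoint (helper name illustrative):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/"

def efetch_pubmed_url(pmids):
    # EFetch against db=pubmed returns full PubmedArticle XML records,
    # including abstract, author list, journal info, MeSH terms, and grants.
    params = {"db": "pubmed", "id": ",".join(pmids), "retmode": "xml"}
    return EUTILS + "efetch.fcgi?" + urlencode(params)

url = efetch_pubmed_url(["23193287", "31452104"])
```

PMIDs that resolve to no record simply produce no article element in the response, which is presumably how `unavailablePmids` gets populated.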
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true; description adds value by specifying return contents (abstract, authors, journal, MeSH terms) without contradicting annotations. Does not mention rate limits or error handling for invalid IDs.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences: first defines the action and input method, second summarizes return values. No redundant information; every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given the output schema exists and annotations are present. Covers core functionality and return value summary. Could mention the 200-item batch limit or grant inclusion capability, but not strictly necessary given schema coverage.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, baseline is 3. Description mentions 'PubMed IDs' aligning with pmids parameter and references 'MeSH terms' relating to includeMesh flag, but does not elaborate on parameter semantics beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Fetch' with clear resource 'article metadata by PubMed IDs' effectively distinguishes from siblings like pubmed_search_articles (which finds IDs) and pubmed_fetch_fulltext (which retrieves full text rather than metadata).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies prerequisites by stating 'by PubMed IDs,' suggesting IDs must be obtained first (likely via search), but lacks explicit when-to-use guidance comparing it to pubmed_search_articles or pubmed_fetch_fulltext alternatives.
pubmed_fetch_fulltext (Pubmed Fetch Fulltext) · A · Read-only
Fetch full-text articles from PubMed Central (PMC). Returns complete article body text, sections, and references for open-access articles. Accepts PMC IDs directly or PubMed IDs (auto-resolved via ELink).
| Name | Required | Description | Default |
|---|---|---|---|
| pmids | No | PubMed IDs to resolve to PMC full text. Provide this OR pmcids, not both. Only works for open-access articles available in PMC. | |
| pmcids | No | PMC IDs to fetch (e.g. ["PMC9575052"]). Provide this OR pmids, not both. | |
| sections | No | Filter to specific sections by title, case-insensitive (e.g. ["Introduction", "Methods", "Results", "Discussion"]) | |
| maxSections | No | Maximum top-level body sections | |
| includeReferences | No | Include reference list | |
Output Schema
| Name | Required | Description |
|---|---|---|
| articles | Yes | Full-text articles |
| totalReturned | Yes | Number of articles returned |
| unavailablePmids | No | PMIDs not available in PMC |
| unavailablePmcIds | No | PMC IDs that returned no data |
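The "auto-resolved via ELink" step plus the full-text fetch can be sketched as two E-utilities requests. This is an assumed shape of the underlying calls, not the server's implementation:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/"

def elink_pmid_to_pmc_url(pmids):
    # ELink maps PubMed IDs to their PMC counterparts (when one exists).
    params = {"dbfrom": "pubmed", "db": "pmc", "id": ",".join(pmids)}
    return EUTILS + "elink.fcgi?" + urlencode(params)

def efetch_pmc_url(pmcid):
    # EFetch against db=pmc returns full-text JATS XML, but only for
    # open-access articles; others yield no body text.
    params = {"db": "pmc", "id": pmcid.removeprefix("PMC"), "retmode": "xml"}
    return EUTILS + "efetch.fcgi?" + urlencode(params)

link_url = elink_pmid_to_pmc_url(["23193287"])
text_url = efetch_pmc_url("PMC9575052")
```

PMIDs with no PMC counterpart fail at the ELink step, which would explain the separate `unavailablePmids` and `unavailablePmcIds` output fields.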
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With annotations covering read-only and external-system hints, the description adds valuable behavioral context including the return structure ('body text, sections, and references') and the 'auto-resolved via ELink' mechanism. It appropriately discloses the open-access constraint, though it omits rate limits or error behaviors for unavailable articles.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently convey distinct information: the fetch action, return content with open-access constraint, and input flexibility. There is zero redundancy; every clause provides specific value about functionality, limitations, or input formats.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Considering the presence of an output schema and comprehensive input annotations, the description adequately covers the tool's purpose, content limitations, and ID resolution behavior. It appropriately omits detailed return value specifications (covered by output schema), though explicitly mentioning the 10-article limit would strengthen completeness further.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 100% schema description coverage, the baseline is appropriately 3. The description mentions the dual ID input methods (PMC vs PubMed) but essentially restates the schema documentation without adding significant semantic depth regarding the section filtering or the mutual exclusivity logic already present in parameter descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb-resource pair ('Fetch full-text articles from PubMed Central') and distinguishes from sibling 'pubmed_fetch_articles' by emphasizing 'complete article body text, sections, and references.' The scope limitation to 'open-access articles' further clarifies the tool's specific capability and boundaries.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage contexts by noting the tool requires 'open-access articles' and handles both PMC and PubMed IDs. However, it lacks explicit guidance on when to select this tool versus 'pubmed_fetch_articles' or alternatives for non-open-access content, leaving selection criteria implied rather than stated.
pubmed_format_citations (Pubmed Format Citations) · B · Read-only
Get formatted citations for PubMed articles in APA, MLA, BibTeX, or RIS format.
| Name | Required | Description | Default |
|---|---|---|---|
| pmids | Yes | PubMed IDs to cite | |
| styles | No | Citation styles to generate | |
Output Schema
| Name | Required | Description |
|---|---|---|
| citations | Yes | Citations per article |
| totalFormatted | Yes | Number of PMIDs successfully formatted |
| totalSubmitted | Yes | Number of PMIDs submitted for citation formatting |
| unavailablePmids | No | Requested PMIDs that did not return article metadata |
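The server's formatting logic is not shown on this page. As a rough illustration only, APA-style assembly from fetched metadata might look like the following (function name, author-joining rule, and the sample metadata are all illustrative and much simplified relative to real APA rules):

```python
def apa_citation(authors, year, title, journal, volume, pages):
    # Simplified: real APA handles >20 authors, missing fields, DOIs, etc.
    if len(authors) > 1:
        author_part = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        author_part = authors[0]
    return f"{author_part} ({year}). {title}. {journal}, {volume}, {pages}."

c = apa_citation(["Sayers, E. W.", "Bolton, E. E."], 2022,
                 "Database resources of the NCBI",
                 "Nucleic Acids Research", 50, "D20-D26")
```

BibTeX and RIS outputs would be assembled from the same metadata fields, just serialized into their respective record formats.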
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true, covering safety and external access. The description adds value by specifying the four supported citation formats, but omits behavioral details like the 50-PMID limit, error handling for invalid IDs, or rate limiting considerations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with active verb ('Get'), no redundant or filler text. Efficiently communicates core functionality without waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple two-parameter tool with full schema coverage and existing output schema. Missing minor helpful context like the 50-item limit or authentication requirements, but sufficient given structured fields handle the heavy lifting.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. The description lists the citation styles (APA, MLA, BibTeX, RIS) which mirrors the enum in the schema but doesn't add semantic depth about parameter usage (e.g., why one might choose RIS over BibTeX). Baseline 3 appropriate for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Get formatted citations') and resource ('PubMed articles') with concrete output formats (APA, MLA, BibTeX, RIS). However, it doesn't explicitly specify input is by PMID nor distinguish from sibling 'pubmed_lookup_citation', leaving minor ambiguity about whether this searches or formats existing IDs.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus siblings like 'pubmed_lookup_citation' or 'pubmed_fetch_articles'. No prerequisites, limitations, or 'when-not-to-use' advice is included.
pubmed_lookup_citation (Pubmed Lookup Citation) · A · Read-only
Look up PubMed IDs from partial bibliographic citations. Useful when you have a reference (journal, year, volume, page, author) and need the PMID. Uses NCBI ECitMatch for deterministic matching — more reliable than searching by citation fields.
| Name | Required | Description | Default |
|---|---|---|---|
| citations | Yes | Citations to look up. More fields = better match accuracy. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | Match results, one per input citation |
| totalMatched | Yes | Number of citations with PMID matches |
| totalSubmitted | Yes | Number of citations submitted |
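ECitMatch takes citations in a pipe-delimited `bdata` string, one citation per line. A sketch of the request shape (helper name and field-key names are illustrative; the pipe-delimited order is ECitMatch's documented format):

```python
from urllib.parse import urlencode

def ecitmatch_url(citations):
    # Each citation is journal|year|volume|first_page|author|key| and
    # entries are separated by carriage returns.
    bdata = "\r".join(
        "{journal}|{year}|{volume}|{page}|{author}|{key}|".format(**c)
        for c in citations
    )
    params = {"db": "pubmed", "retmode": "xml", "bdata": bdata}
    return ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/ecitmatch.cgi?"
            + urlencode(params))

url = ecitmatch_url([{"journal": "proc natl acad sci u s a", "year": 1991,
                      "volume": 88, "page": 3248, "author": "mann bj",
                      "key": "cite1"}])
```

The response echoes each `key` with its matched PMID, or a not-found marker, which maps naturally onto the `results`/`totalMatched` output fields.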
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and openWorldHint=true. The description adds valuable behavioral context by naming the specific backend service ('NCBI ECitMatch') and explaining the matching characteristic ('deterministic'), which helps the agent understand result reliability. It does not mention rate limits or error states, but the external dependency is disclosed.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 defines the core function, sentence 2 states the use case, and sentence 3 provides implementation detail and comparative advantage. Information is front-loaded and every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage for the single complex parameter (citations array) and the existence of an output schema, the description appropriately focuses on purpose and usage guidance rather than parameter mechanics. It could be improved by mentioning the 25-item limit or no-match behavior, but it is complete enough for an agent to invoke the tool correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all citation fields (year, volume, journal, etc.). The description lists these fields ('journal, year, volume, page, author') but does not add semantic meaning, formats, or constraints beyond what the schema already provides, meeting the baseline for high-coverage schemas.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('Look up') and resource ('PubMed IDs'), clarifies the input type ('partial bibliographic citations'), and distinguishes the tool from siblings by noting it uses 'ECitMatch for deterministic matching — more reliable than searching by citation fields', implicitly contrasting it with pubmed_search_articles.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context ('Useful when you have a reference... and need the PMID') and explicitly contrasts this tool with searching ('more reliable than searching by citation fields'). It lacks explicit negative guidance (when NOT to use) or references to other siblings like pubmed_convert_ids, but the primary use case is well-defined.
pubmed_lookup_mesh (Pubmed Lookup Mesh) · A · Read-only
Search and explore MeSH (Medical Subject Headings) vocabulary. Essential for building precise PubMed queries.
| Name | Required | Description | Default |
|---|---|---|---|
| term | Yes | MeSH term to look up | |
| maxResults | No | Maximum results | |
| includeDetails | No | Fetch full MeSH records (scope notes, tree numbers, entry terms) | |
Output Schema
| Name | Required | Description |
|---|---|---|
| term | Yes | Original search term |
| results | Yes | Matching MeSH records |
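MeSH lookup plausibly maps to an ESearch call against the `mesh` database, with a follow-up ESummary/EFetch for the detailed records when `includeDetails` is set. A sketch under that assumption:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/"

def mesh_search_url(term, retmax=20):
    # ESearch against db=mesh returns UIDs of matching MeSH records;
    # ESummary/EFetch can then pull scope notes and tree numbers per UID.
    params = {"db": "mesh", "term": term, "retmax": retmax, "retmode": "json"}
    return EUTILS + "esearch.fcgi?" + urlencode(params)

url = mesh_search_url("neoplasms")
```

The matched headings can then be dropped into PubMed queries as `term[mh]` clauses, which is the "building precise queries" workflow the description points at.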
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true, covering safety and external access. The description adds valuable use-case context (building precise queries) but does not elaborate on behavioral specifics like rate limits, pagination, or exact return structure beyond what the output schema provides.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence establishes purpose; second provides usage context. Every word earns its place and the description is appropriately front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of output schema (handling return value documentation), 100% parameter schema coverage, and readOnly annotations, the description provides sufficient context. It appropriately focuses on purpose and use-case rather than redundant technical details.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing detailed descriptions for all three parameters (term, maxResults, includeDetails). The description mentions 'MeSH vocabulary' which aligns with the term parameter but does not add semantic guidance beyond what the schema already documents. Baseline 3 is appropriate for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states specific actions ('Search and explore') and specific resource ('MeSH vocabulary'). It clearly distinguishes this from sibling article-search tools by focusing on vocabulary/metadata rather than document retrieval.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('Essential for building precise PubMed queries') implying when to use it (query construction phase). However, lacks explicit 'when not to use' guidance or named alternative tools like pubmed_search_articles for direct article retrieval.
pubmed_search_articles (Pubmed Search Articles) · A · Read-only
Search PubMed with full query syntax, filters, and date ranges. Returns PMIDs and optional brief summaries. Supports field-specific filters (author, journal, MeSH terms), common filters (language, species, free full text), and pagination via offset for paging through large result sets.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort order: relevance (default), pub_date (newest first), author, or journal | relevance |
| query | Yes | PubMed search query (supports full NCBI syntax) | |
| author | No | Filter by author name (e.g. "Smith J") | |
| offset | No | Result offset for pagination (0-based) | |
| journal | No | Filter by journal name | |
| species | No | Filter by species | |
| language | No | Filter by language (e.g. "english") | |
| dateRange | No | Filter by date range | |
| meshTerms | No | Filter by MeSH terms | |
| maxResults | No | Maximum results to return | |
| hasAbstract | No | Only include articles with abstracts | |
| freeFullText | No | Only include free full text articles | |
| summaryCount | No | Fetch brief summaries for top N results (0 = PMIDs only) | |
| publicationTypes | No | Filter by publication type (e.g. "Review", "Clinical Trial", "Meta-Analysis") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| pmids | Yes | PubMed IDs |
| query | Yes | Original query |
| offset | Yes | Result offset used |
| searchUrl | Yes | PubMed search URL |
| summaries | Yes | Brief summaries (empty array when summaryCount is 0) |
| totalFound | Yes | Total matching articles |
| appliedFilters | Yes | Normalized filter values that were applied to the PubMed query |
| effectiveQuery | Yes | Sanitized query sent to PubMed after applying all active filters |
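The filter parameters translate into PubMed field tags appended to the query, which is presumably what `effectiveQuery` reports. A sketch of how an ESearch URL might be assembled from a subset of these filters (helper name and the exact tag-joining logic are assumptions):

```python
from urllib.parse import urlencode

def esearch_url(query, author=None, journal=None, retmax=20, retstart=0,
                sort="relevance"):
    # Field tags refine the query: [au] = author, [ta] = journal title;
    # MeSH terms would use [mh], publication types [pt], language [la].
    term = query
    if author:
        term += f" AND {author}[au]"
    if journal:
        term += f' AND "{journal}"[ta]'
    params = {"db": "pubmed", "term": term, "retmax": retmax,
              "retstart": retstart, "sort": sort, "retmode": "json"}
    return ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
            + urlencode(params))

url = esearch_url("crispr gene editing", author="Zhang F", retmax=10)
```

`retstart` corresponds to the tool's `offset` parameter, and the PMIDs in the ESearch response feed directly into pubmed_fetch_articles.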
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (readOnlyHint, openWorldHint), the description adds valuable behavioral context about pagination ('offset for paging through large result sets') and clarifies the return format distinction (PMIDs vs. brief summaries). It appropriately hints at external API usage via 'NCBI syntax' mention.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three information-dense sentences. It front-loads the core purpose, follows with return value specifics, and concludes with capability enumeration. No redundant or filler language is present.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (14 parameters, nested objects) and rich structured metadata (100% schema coverage, annotations, output schema), the description successfully covers the high-level behavioral traits (pagination, return types) without needing to replicate parameter-level documentation. It could be improved by mentioning the relationship to sibling fetch tools.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents all 14 parameters. The description provides categorical grouping ('field-specific filters' vs 'common filters') which organizes the parameters conceptually, but does not add significant semantic meaning, syntax examples, or constraints beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the core action ('Search PubMed'), the scope ('full query syntax, filters, and date ranges'), and distinguishes itself from siblings by specifying it 'Returns PMIDs and optional brief summaries'—clearly positioning it as the discovery tool versus the 'fetch' sibling tools that likely require PMIDs as input.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies a workflow by stating it returns PMIDs (suggesting it precedes fetch operations), it lacks explicit guidance on when to use this versus siblings like pubmed_fetch_articles or pubmed_find_related. It does not state prerequisites or exclusions (e.g., 'use this when you don't already have PMIDs').
pubmed_spell_check (Pubmed Spell Check) · A · Read-only
Spell-check a query and get NCBI's suggested correction. Useful for refining search queries.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | PubMed search query to spell-check | |
Output Schema
| Name | Required | Description |
|---|---|---|
| original | Yes | Original query |
| corrected | Yes | Corrected query (same as original if no suggestion) |
| hasSuggestion | Yes | Whether NCBI suggested a correction |
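This tool corresponds to NCBI's ESpell utility, which takes a single query term and returns a corrected spelling when one exists. A sketch of the request (helper name illustrative):

```python
from urllib.parse import urlencode

def espell_url(query):
    # ESpell returns NCBI's suggested spelling correction for a PubMed
    # query; when it has no suggestion, the corrected query equals the input.
    params = {"db": "pubmed", "term": query}
    return ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/espell.fcgi?"
            + urlencode(params))

url = espell_url("asthmaa treatmnt")
```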
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds context that it connects to 'NCBI' (aligning with openWorldHint), but does not disclose additional behavioral traits like what happens when no corrections exist or how many suggestions are returned.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence defines the action and return value, second provides usage context. Appropriately front-loaded and sized for the tool's simplicity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter, simple structure, and an output schema exists, the description provides sufficient context. It could be improved by noting this is specifically for PubMed queries, but the tool name makes this clear.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents the 'query' parameter. The description does not add supplementary semantics, examples, or format constraints beyond what the schema provides, warranting the baseline score.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action ('Spell-check a query'), the external resource ('NCBI's suggested correction'), and distinguishes clearly from sibling tools which focus on fetching articles, converting IDs, or formatting citations rather than query validation.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on when to use ('Useful for refining search queries'), indicating it should be used for query validation before searching. However, it does not explicitly mention when NOT to use it or name specific sibling alternatives like pubmed_search_articles.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
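Before publishing, you can sanity-check the claim file's structure locally. This is an unofficial sketch of the minimal checks implied by the example above, not Glama's actual validation logic:

```python
import json

def validate_glama_claim(raw):
    # Minimal structural check: valid JSON with a non-empty maintainers
    # array whose entries each carry an email field.
    data = json.loads(raw)
    maintainers = data.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

ok = validate_glama_claim(
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
```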
Claiming the listing lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.