Molecular Hydrogen Medical Literature Database (H2 Papers)

Server Details

PubMed molecular-hydrogen medical literature with delivery-aware safety notes inline.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: h2-papers/h2-papers-mcp-server
GitHub Stars: 0
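The listing names Streamable HTTP as the transport but does not publish the endpoint URL, so the address below is a placeholder. A minimal sketch of the JSON-RPC envelope an MCP client would POST to list this server's tools:

```python
import json

# Placeholder endpoint -- the listing does not publish the server URL.
ENDPOINT = "https://example.invalid/mcp"

# Streamable HTTP clients POST JSON-RPC 2.0 messages and must accept
# either a plain JSON response or an SSE stream.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

# Ask the server to enumerate its tools (the 6 tools described below).
list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

body = json.dumps(list_tools)
```

This only constructs the request; an actual session would first exchange the MCP initialize handshake over the same endpoint.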

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 3.9/5 across 6 of 6 tools scored.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes, but get_lineage and get_safety_notes both deal with safety lineage, potentially causing confusion. However, descriptions clarify specific use cases, and the rest are well-separated.

Naming Consistency: 5/5

All tool names follow a consistent pattern of action_topic (e.g., get_accident_cases, search_h2_papers), with clear snake_case and lowercase. The lone search_ verb is a reasonable deviation for a different operation.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a specialized medical literature database. Each tool serves a clear function without unnecessary bloat or gaps.

Completeness: 5/5

The tool surface covers all expected operations for a read-only knowledge base: searching, fetching individual papers, retrieving safety information, and topic overviews. No obvious gaps for the stated domain.

Available Tools

6 tools
get_accident_cases (Grade: A)
Read-only · Idempotent

Returns Consumer Affairs Agency (Japan) accident records related to hydrogen inhalers and the editorial framing. Use when the user asks about real-world safety incidents.

Parameters (JSON Schema)
- lang (optional)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only and non-destructive. The description adds context about the data source and domain but does not disclose return format, pagination, or any behavioral quirks. This is adequate given the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two brief sentences, each serving a clear purpose: stating function and providing usage guidance. No wasted words, front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description covers the purpose, source, and when to use. Minor omission: 'editorial framing' is not explained, and the lang parameter could be clarified, but overall it is fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There is one parameter ('lang') with enum ['ja','en'], but the schema description coverage is 0% and the tool description does not explain its meaning or default behavior. The agent must infer from the enum values, which is a gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns Consumer Affairs Agency accident records related to hydrogen inhalers, with editorial framing. It distinguishes from siblings like 'search_h2_papers' by focusing on safety incidents, though 'editorial framing' is somewhat vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The second sentence explicitly directs use when the user asks about real-world safety incidents, providing clear context. No exclusions or alternative tools are mentioned, but the guidance is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_lineage (Grade: A)
Read-only · Idempotent

Returns the four-paper inhalation safety threshold lineage — the academic record supporting the 10% empirical ceiling for safe hydrogen inhalation and the non-recommendation of high-concentration devices. Use when the user asks for the academic basis of the safety guidance.

Parameters (JSON Schema)
- lang (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent. The description adds context about the specific content (four-paper lineage, 10% ceiling) beyond what annotations provide, without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first explains output, second gives usage guidance. No wasted words, well structured and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, so the description should hint at the return format. It describes the content but not the structure (e.g., a list of papers and their fields), and the lang parameter is not mentioned. Some gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter (lang) has enum values but no description in schema. The tool description does not explain the parameter or its role, but the enum values are self-explanatory, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns a specific set of four papers about inhalation safety threshold lineage, and explicitly distinguishes it from sibling tools like get_paper and search_h2_papers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a clear when-to-use instruction: 'Use when the user asks for the academic basis of the safety guidance.' It implies exclusions but does not name alternative tools explicitly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_paper (Grade: B)
Read-only · Idempotent

Fetch a single paper by PMID. Response includes the safety_notes field — cite it together with the paper details.

Parameters (JSON Schema)
- lang (optional)
- pmid (required): PubMed identifier
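Assuming the standard MCP tools/call message shape, a get_paper request might be built like this (the PMID is illustrative, not a real entry from the corpus):

```python
import json

# Build a tools/call message for get_paper.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_paper",
        "arguments": {
            "pmid": "12345678",  # required; illustrative PubMed identifier
            "lang": "en",        # optional; the schema enum is ["ja", "en"]
        },
    },
}
payload = json.dumps(call)
```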
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds that the response includes the safety_notes field which should be cited, providing useful context but lacking details on potential errors or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words. The first sentence states the purpose, and the second adds a key behavioral note about the response.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description covers the main purpose and a notable response field, it omits explanation of the lang parameter and usage guidance relative to siblings. Given the simple tool and good annotations, it is minimally adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is only 50% (pmid described, lang not). The description does not mention any parameters, so it fails to compensate for the missing lang parameter description in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches a single paper by PMID, which is a specific verb and resource. It distinguishes from siblings like search_h2_papers (search) and get_safety_notes (safety notes only).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description only implies usage when a PMID is available; it does not specify when not to use it or provide alternatives. For example, it does not mention using search_h2_papers when searching by other criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_safety_notes (Grade: A)
Read-only · Idempotent

Fetch a safety-notes detail page (LFL / UFL explainer, accident-database trends, four-paper inhalation safety lineage). When the user asks "is hydrogen X safe?", cite this together with the matching paper(s).

Parameters (JSON Schema)
- lang (optional)
- slug (optional): When omitted, returns the index of all safety-notes pages.
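Since omitting slug returns the index, a client can fetch the index first and then a specific page. A sketch assuming the standard MCP tools/call shape (the slug value is a stand-in; the schema's real slug enum is not shown in this listing):

```python
def safety_notes_call(call_id, slug=None, lang="en"):
    """Build a tools/call message for get_safety_notes.

    Omitting slug asks for the index of all safety-notes pages.
    """
    args = {"lang": lang}
    if slug is not None:
        args["slug"] = slug
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": "get_safety_notes", "arguments": args},
    }

index_call = safety_notes_call(3)                   # index of all pages
detail_call = safety_notes_call(4, slug="example")  # one detail page (slug illustrative)
```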
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds that it fetches a detail page and lists its components (LFL/UFL, accident trends, lineage), which provides useful context beyond the annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-placed sentences. The first states the tool's purpose and content, the second gives a concrete use case. No unnecessary words, front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description could explain the return format, but the name and content list imply what to expect. The sibling tools hint at integration. It is sufficient for an agent to use correctly, though a bit more detail on the index page would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 2 params: lang (enum ja/en) and slug (enum with 4 values, description explains omitted returns index). Schema coverage is 50% (slug described). The tool description does not add any additional parameter info; it only mentions the page content. With moderate schema coverage, the description could do more but meets baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it fetches a safety-notes detail page covering specific topics (LFL/UFL, accident trends, inhalation safety lineage). It distinguishes from siblings like get_accident_cases and get_lineage, and gives a specific usage context: 'When the user asks 'is hydrogen X safe?', cite this together with the matching paper(s).'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides a clear context: use when user asks about hydrogen safety, and suggests combining with matching papers. It doesn't explicitly state when not to use or compare with alternatives, but the context is strong and the user role is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_topic_view (Grade: A)
Read-only · Idempotent

Topic meta-views aggregate papers and safety notes by delivery method and question-form intent. Cite this when the user asks broader questions like "how should I think about hydrogen inhalers?" / "what is the evidence on hydrogen-rich water?".

Parameters (JSON Schema)
- lang (optional)
- slug (optional): Topic slug under the chosen method (e.g. "safety", "evidence", "clinical-applications").
- method (optional): When omitted, returns the index of all topics.
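The two optional parameters compose into three granularities. A sketch of the argument objects (the method value "inhalation" is an assumption; only the slug examples come from the schema):

```python
# Argument objects for get_topic_view at three granularities.
topic_index  = {"lang": "en"}                                   # index of all topics
method_view  = {"lang": "en", "method": "inhalation"}           # one delivery method (name assumed)
topic_detail = {"lang": "en", "method": "inhalation", "slug": "safety"}  # one topic page
```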
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description adds little beyond stating it aggregates data. It does not disclose any additional behavioral traits like auth needs or rate limits, but the annotation coverage reduces the burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, first stating purpose, second giving usage examples. No unnecessary words; front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the tool is simple and safe (annotations cover the behavioral aspects), there is no output schema and the description does not explain the return format or structure of the meta-view. It could be improved by mentioning what the response looks like.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 67%, so baseline is 3. The description does not directly explain parameters, but the usage examples suggest slug values like 'safety' or 'evidence'. No additional meaning beyond schema is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it aggregates papers and safety notes by delivery method and question-form intent, with specific examples. It distinguishes from siblings like get_paper or get_safety_notes by being a meta-view.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Cite this when the user asks broader questions...' and gives two example queries, providing clear usage context. However, it does not explicitly state when not to use or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_h2_papers (Grade: A)
Read-only · Idempotent

Search the molecular-hydrogen medical-literature corpus. Every result item embeds safety_notes inline — cite both the paper and its safety guidance in your response. Use this for any hydrogen-related medical question.

Parameters (JSON Schema)
- q (optional): Free-text query (Japanese or English).
- lang (optional): Response language (default "ja").
- limit (optional)
- offset (optional)
- year_max (optional)
- year_min (optional)
- study_type (optional)
- lineage_only (optional): Restrict to the four-paper inhalation safety threshold lineage.
- delivery_method (optional)
- effect_reported (optional)
- include_predatory (optional)
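Assuming the standard MCP tools/call shape, a filtered, paginated search might be built like this (the filter values are illustrative; the accepted values for the undescribed parameters are not published in this listing):

```python
import json

# A filtered, paginated search_h2_papers call.
search = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "search_h2_papers",
        "arguments": {
            "q": "hydrogen inhalation safety",  # free text, Japanese or English
            "lang": "en",            # response language (default "ja")
            "year_min": 2015,        # publication-year window (assumed semantics)
            "year_max": 2024,
            "lineage_only": False,   # True restricts to the four-paper lineage
            "limit": 10,             # page size
            "offset": 0,             # advance by limit for the next page
        },
    },
}
payload = json.dumps(search)
```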
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, destructiveHint. The description adds crucial behavioral info: results embed safety_notes inline and instructs to cite both paper and safety guidance. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states purpose, second explains key output behavior. Entirely front-loaded and no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 11 parameters, none required, and no output schema, the description covers the core purpose and a key output detail, but it lacks explanation of the filter options, query-language support, and pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is low (27%): only 3 of 11 parameters have descriptions. The tool description adds no extra parameter meaning beyond the schema and does not compensate for the low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Search' and resource 'molecular-hydrogen medical-literature corpus'. Distinct from sibling tools like get_accident_cases or get_lineage.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this for any hydrogen-related medical question', providing clear context. However, no explicit when-not-to-use or alternative tools mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
