Molecular Hydrogen Medical Literature Database (H2 Papers)
Server Details
PubMed molecular-hydrogen medical literature with delivery-aware safety notes inline.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: h2-papers/h2-papers-mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 6 of 6 tools scored.
Most tools have distinct purposes, but get_lineage and get_safety_notes both deal with safety lineage, potentially causing confusion. However, descriptions clarify specific use cases, and the rest are well-separated.
All tool names follow a consistent pattern of action_topic (e.g., get_accident_cases, search_h2_papers), with clear snake_case and lowercase. The lone search_ verb is a reasonable deviation for a different operation.
With 6 tools, the set is well-scoped for a specialized medical literature database. Each tool serves a clear function without unnecessary bloat or gaps.
The tool surface covers all expected operations for a read-only knowledge base: searching, fetching individual papers, retrieving safety information, and topic overviews. No obvious gaps for the stated domain.
Available Tools
6 tools

get_accident_cases — Grade A · Read-only · Idempotent
Returns Consumer Affairs Agency (Japan) accident records related to hydrogen inhalers and the editorial framing. Use when the user asks about real-world safety incidents.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | |
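To make the call shape concrete, here is a minimal TypeScript sketch using the MCP client SDK over the Streamable HTTP transport listed above. The server URL is a placeholder (the listing omits it), and the `lang` value assumes the `ja`/`en` enum noted in the quality review below.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder URL — the listing above does not publish the endpoint.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "h2-papers-demo", version: "1.0.0" });
await client.connect(transport);

// Read-only, idempotent call; lang is assumed to accept "ja" | "en".
const result = await client.callTool({
  name: "get_accident_cases",
  arguments: { lang: "en" },
});
console.log(result.content);
```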
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only and non-destructive. The description adds context about the data source and domain but does not disclose return format, pagination, or any behavioral quirks. This is adequate given the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two brief sentences, each serving a clear purpose: stating function and providing usage guidance. No wasted words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description covers the purpose, source, and when to use. Minor omission: 'editorial framing' is not explained, and the lang parameter could be clarified, but overall it is fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There is one parameter ('lang') with enum ['ja','en'], but the schema description coverage is 0% and the tool description does not explain its meaning or default behavior. The agent must infer from the enum values, which is a gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns Consumer Affairs Agency accident records related to hydrogen inhalers, with editorial framing. It distinguishes from siblings like 'search_h2_papers' by focusing on safety incidents, though 'editorial framing' is somewhat vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The second sentence explicitly directs use when the user asks about real-world safety incidents, providing clear context. No exclusions or alternative tools are mentioned, but the guidance is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_lineage — Grade A · Read-only · Idempotent
Returns the four-paper inhalation safety threshold lineage — the academic record supporting the 10% empirical ceiling for safe hydrogen inhalation and the non-recommendation of high-concentration devices. Use when the user asks for the academic basis of the safety guidance.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | |
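Invoking it mirrors the get_accident_cases sketch above; assuming the same connected `client`, only the tool name changes:

```typescript
// Reuses the connected `client` from the get_accident_cases sketch.
const lineage = await client.callTool({
  name: "get_lineage",
  arguments: { lang: "en" }, // assumed "ja" | "en" enum
});
```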
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and idempotent. The description adds context about the specific content (four-paper lineage, 10% ceiling) beyond what annotations provide, without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first explains output, second gives usage guidance. No wasted words, well structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, so the description should hint at the return format. It describes the content but not the structure (e.g., a list of papers and their fields), and the lang parameter is not mentioned. Some gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter (lang) has enum values but no description in the schema. The tool description does not explain the parameter or its role, but the enum values are self-explanatory, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a specific set of four papers about inhalation safety threshold lineage, and explicitly distinguishes it from sibling tools like get_paper and search_h2_papers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear when-to-use instruction: 'Use when the user asks for the academic basis of the safety guidance.' It implies exclusions but does not name alternative tools explicitly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_paper — Grade B · Read-only · Idempotent
Fetch a single paper by PMID. Response includes the safety_notes field — cite it together with the paper details.
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | |
| pmid | Yes | PubMed identifier | |
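A sketch of the call, again reusing the connected `client` from the first example; the PMID here is a placeholder, not a real citation.

```typescript
// `pmid` is required; the value below is an illustrative placeholder.
const paper = await client.callTool({
  name: "get_paper",
  arguments: { pmid: "12345678", lang: "en" },
});
// Per the description, the response embeds a safety_notes field —
// cite it together with the paper details.
```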
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds that the response includes the safety_notes field which should be cited, providing useful context but lacking details on potential errors or limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. The first sentence states the purpose, and the second adds a key behavioral note about the response.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers the main purpose and a notable response field, it omits explanation of the lang parameter and usage guidance relative to siblings. Given the simple tool and good annotations, it is minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is only 50% (pmid described, lang not). The description does not mention any parameters, so it fails to compensate for the missing lang parameter description in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool fetches a single paper by PMID, which is a specific verb and resource. It distinguishes from siblings like search_h2_papers (search) and get_safety_notes (safety notes only).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description only implies usage when a PMID is available; it does not specify when not to use it or provide alternatives. For example, it does not mention using search_h2_papers when searching by other criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_safety_notes — Grade A · Read-only · Idempotent
Fetch a safety-notes detail page (LFL / UFL explainer, accident-database trends, four-paper inhalation safety lineage). When the user asks "is hydrogen X safe?", cite this together with the matching paper(s).
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | |
| slug | No | When omitted, returns the index of all safety-notes pages. | |
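Because the schema documents that omitting `slug` returns an index, a natural first call skips it; this sketch reuses the connected `client` from the first example.

```typescript
// Omitting `slug` returns the index of all safety-notes pages (per the schema note).
const index = await client.callTool({
  name: "get_safety_notes",
  arguments: { lang: "en" },
});
// A follow-up call would pass one of the slugs listed in the index.
```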
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds that it fetches a detail page and lists its components (LFL/UFL, accident trends, lineage), which provides useful context beyond the annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-placed sentences. The first states the tool's purpose and content, the second gives a concrete use case. No unnecessary words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description could explain the return format, but the name and content list imply what to expect, and the sibling tools hint at how it fits alongside them. It is sufficient for an agent to use correctly, though more detail on the index page would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 2 params: lang (enum ja/en) and slug (an enum with 4 values whose description explains that omitting it returns the index). Schema coverage is 50% (only slug is described). The tool description adds no further parameter info; it only mentions the page content. With moderate schema coverage, the description could do more but meets the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches a safety-notes detail page covering specific topics (LFL/UFL, accident trends, inhalation safety lineage). It distinguishes from siblings like get_accident_cases and get_lineage, and gives a specific usage context: 'When the user asks 'is hydrogen X safe?', cite this together with the matching paper(s).'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides a clear context: use when user asks about hydrogen safety, and suggests combining with matching papers. It doesn't explicitly state when not to use or compare with alternatives, but the context is strong and the user role is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_topic_view — Grade A · Read-only · Idempotent
Topic meta-views aggregate papers and safety notes by delivery method and question-form intent. Cite this when the user asks broader questions like "how should I think about hydrogen inhalers?" / "what is the evidence on hydrogen-rich water?".
| Name | Required | Description | Default |
|---|---|---|---|
| lang | No | | |
| slug | No | Topic slug under the chosen method (e.g. "safety", "evidence", "clinical-applications"). | |
| method | No | When omitted, returns the index of all topics. | |
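As with get_safety_notes, the documented default makes the index a sensible first call; this sketch reuses the connected `client` and avoids guessing `method` values, which the listing does not enumerate.

```typescript
// Omitting `method` returns the index of all topics (per the schema note).
const topics = await client.callTool({
  name: "get_topic_view",
  arguments: { lang: "en" },
});
// With a method from the index in hand, a slug such as "safety" or
// "evidence" narrows the view.
```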
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the description adds little beyond stating it aggregates data. It does not disclose any additional behavioral traits like auth needs or rate limits, but the annotation coverage reduces the burden.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, first stating purpose, second giving usage examples. No unnecessary words; front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool is simple and safe (annotations cover the behavioral aspects), there is no output schema and the description does not explain the return format or structure of the meta-view. It could be improved by mentioning what the response looks like.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67%, so the baseline score is 3. The description does not directly explain the parameters, though its usage examples suggest slug values like 'safety' or 'evidence'. No meaning beyond the schema is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it aggregates papers and safety notes by delivery method and question-form intent, with specific examples. It distinguishes from siblings like get_paper or get_safety_notes by being a meta-view.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Cite this when the user asks broader questions...' and gives two example queries, providing clear usage context. However, it does not explicitly state when not to use or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_h2_papers — Grade A · Read-only · Idempotent
Search the molecular-hydrogen medical-literature corpus. Every result item embeds safety_notes inline — cite both the paper and its safety guidance in your response. Use this for any hydrogen-related medical question.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Free-text query (Japanese or English). | |
| lang | No | Response language. | ja |
| limit | No | | |
| offset | No | | |
| year_max | No | | |
| year_min | No | | |
| study_type | No | | |
| lineage_only | No | Restrict to the four-paper inhalation safety threshold lineage. | |
| delivery_method | No | | |
| effect_reported | No | | |
| include_predatory | No | | |
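A search sketch, reusing the connected `client` from the first example. Only q, lang, and lineage_only are described in the schema; the types of limit and year_min are assumed numeric, and the values are illustrative.

```typescript
const results = await client.callTool({
  name: "search_h2_papers",
  arguments: {
    q: "hydrogen inhalation safety", // free-text, Japanese or English
    lang: "en",                      // response language (default ja)
    limit: 10,                       // assumed numeric page size
    year_min: 2015,                  // assumed numeric year filter
    lineage_only: false,             // restrict to the four-paper lineage
  },
});
// Every result item embeds safety_notes inline — cite both the paper
// and its safety guidance, per the tool description.
```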
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, destructiveHint. The description adds crucial behavioral info: results embed safety_notes inline and instructs to cite both paper and safety guidance. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states purpose, second explains key output behavior. Entirely front-loaded and no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 11 parameters, none required, and no output schema, the description covers the core purpose and a key output detail, but it lacks explanation of the filter options, query-language support, or pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is low (27%): only 3 of 11 parameters have descriptions. The tool description adds no extra parameter meaning beyond the schema and does not compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'Search' and resource 'molecular-hydrogen medical-literature corpus'. Distinct from sibling tools like get_accident_cases or get_lineage.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this for any hydrogen-related medical question', providing clear context. However, no explicit when-not-to-use guidance or alternative tools are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.