
Server Quality Checklist

Profile completion: 58%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.11

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 8 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations cover read-only/idempotent safety. Description adds valuable behavioral context: output format (PNG) and content type (2D structure). Does not mention error handling for invalid CIDs, rate limits, or caching behavior, which would be helpful given openWorldHint.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, zero waste. Front-loaded with action (Fetch), output format (PNG), and key constraint (by CID). Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Sufficient for a 2-parameter retrieval tool with full schema coverage and output schema present. Description adequately covers the tool's function without redundancy. Minor gap: does not acknowledge the optional 'size' parameter existence in prose, though schema is comprehensive.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing baseline 3. Description mentions 'by CID' confirming the required parameter's purpose, but adds no further syntax or semantic details beyond what the schema already provides for 'size' (small/large dimensions).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Fetch' + precise resource '2D structure diagram (PNG image)' + key parameter 'by CID'. Clearly distinguishes from siblings like pubchem_get_compound_details (data) and pubchem_search_compounds (search).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context that this returns an image file format (PNG) versus data, helping agents select it for visualization needs. Lacks explicit comparison to specific siblings (e.g., 'use get_compound_details for structured data instead'), but the output type distinction is unambiguous.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations declare read-only and idempotent characteristics, the description adds crucial behavioral context: 'Results are capped per type with total counts reported.' This explains the limiting behavior and response structure beyond what annotations provide, though it omits rate limits or error conditions.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences total with zero waste. The first front-loads the core purpose and data types; the second provides essential behavioral constraints. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema and comprehensive input schema, the description appropriately focuses on purpose and behavioral quirks (capping) rather than return values. It adequately covers the tool's scope for a read-only lookup operation, though it could mention external service dependencies implied by openWorldHint.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents all parameters including the rationale for maxPerType ('prevents bloat'). The description lists the xref types but does not add semantic meaning beyond what the schema already provides, warranting the baseline score for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states a specific verb ('Get') and resource ('external database cross-references') and enumerates the exact xref types retrieved (PubMed, patents, genes, etc.). This clearly distinguishes it from siblings like get_compound_details (general metadata) or get_compound_image (visual data).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage through the list of xref types (use when you need PubMed citations, patents, etc.), but provides no explicit when-to-use guidance, when-not-to-use warnings, or named alternatives among the sibling tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true, idempotentHint=true, and openWorldHint=true. The description adds that the tool returns assay IDs (AIDs) and implies the external data scope by mentioning UniProt and NCBI Gene IDs, aligning with openWorldHint. However, it omits error handling behaviors, rate limits, or what happens when no assays are found.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with no wasted words. The first establishes purpose and scope; the second covers input methods, return values, and workflow integration. Information is front-loaded and dense.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the presence of an output schema and comprehensive input documentation, the description adequately covers the tool's purpose, return type (AIDs), and next-step workflow. It appropriately omits detailed return value explanations (covered by output schema) but could benefit from noting result size implications or error states.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is 3. The description reinforces the schema by listing example query types (gene symbol, protein name, etc.) but does not add substantive semantic meaning beyond what the input schema already documents for the three parameters.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose, 'Find PubChem bioassays associated with a biological target,' providing a specific verb and resource. It distinguishes itself from sibling tools by explicitly noting that results 'can be explored further with pubchem_get_summary,' clarifying its role in the workflow versus tools like pubchem_search_compounds.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides workflow guidance by naming pubchem_get_summary as the follow-up tool for exploring returned AIDs. However, it lacks explicit guidance on when to use this assay search versus the compound search (pubchem_search_compounds) or when to prefer pubchem_get_bioactivity directly.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations cover read-only/idempotent traits, but the description adds valuable behavioral context: it specifies the exact data fields returned (signal word, pictograms, H-codes, P-codes) and discloses data provenance ('sourced from PubChem depositors') and attribution practices.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two efficiently structured sentences with zero waste. The first sentence defines the operation, the second details return values and data source—every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool has a single parameter, good annotations, and an output schema, the description is complete. It covers functionality, return content structure, and data sourcing without needing to replicate detailed return value specifications.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already fully documents the 'cid' parameter. The description adds no specific parameter guidance, meeting the baseline expectation for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') and clearly identifies the resource (GHS hazard classification and safety data). It distinguishes itself from siblings like pubchem_get_bioactivity and pubchem_get_compound_image by specifying the exact domain (safety/GHS data).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While it doesn't explicitly name alternatives, the description provides clear context for when to use this tool (when seeking GHS classifications, signal words, pictograms, H-codes, and P-codes) which distinctly separates it from bioactivity or general compound detail tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnly/idempotent/openWorld traits. Description adds valuable operational constraint not in annotations: 'Up to 10 per call' (batch limit). Also clarifies the specific ID formats expected for each entity type, adding context beyond the structured hints.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, zero waste. Front-loaded with action verb ('Get'), follows with scope clarification ('Supports...'), and ends with operational constraint ('Up to 10'). Every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 2 parameters with 100% schema coverage, existing output schema, and read-only annotations, the description provides sufficient context: purpose, entity scope, and batch limits. No gaps requiring additional explanation for correct invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage with detailed identifier format examples. Description reinforces entity type semantics ('assays (AID)', etc.) but largely echoes the schema documentation. With complete schema coverage, baseline 3 is appropriate as description adds minimal semantic depth beyond structured fields.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb ('Get') + resource ('descriptive summaries for PubChem entities') + scope ('by ID'). Explicitly lists supported entity types (assays, genes, proteins, taxonomy) which clearly distinguishes this from compound-focused siblings like pubchem_get_compound_details.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context by enumerating supported entity types (AID, Gene ID, UniProt, Tax ID), implicitly signaling this is not for compounds. However, lacks explicit 'when not to use' guidance or direct sibling comparison (e.g., does not explicitly state 'for compounds use pubchem_get_compound_details').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations confirm read-only/idempotent behavior, the description adds valuable context about the data structure returned (assay lists, quantitative binding constants, target protein names) and the domain meaning of 'activity outcomes.' This helps the agent understand what constitutes a 'bioactivity profile' beyond the annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two well-constructed sentences with zero waste. The first front-loads the core capability and data types (assays, outcomes, targets, quantitative values); the second provides actionable filtering guidance. Every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    With 100% schema coverage, present annotations, and an existing output schema, the description appropriately focuses on domain semantics (bioactivity terminology like IC50/Ki) rather than repeating structural details. The level of detail matches the tool's complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description adds semantic value by explaining the investigative purpose of the outcomeFilter ('focus on active results' for understanding biological profiles), which helps the agent select appropriate parameter values beyond the raw schema enums.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') and clearly identifies the resource ('compound's bioactivity profile'). It distinguishes from siblings by detailing unique bioactivity-specific content (assays, IC50/EC50/Ki values, target gene symbols) that none of the other compound tools provide.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear guidance on using the outcomeFilter parameter ('focus on active results'), implying when to filter versus retrieving all records. However, it lacks explicit comparison to sibling tools (e.g., when to use this versus pubchem_search_assays or pubchem_get_compound_details).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnly/idempotent/openWorld; description adds valuable behavioral context beyond these flags: 'Efficiently batches' signals performance optimization, and noting that includeDescription/includeClassification 'adds one API call per CID' warns about latency/cost implications. No contradictions with annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    A single information-dense compound sentence with zero fluff. Front-loaded with core purpose ('Get detailed compound information by CID'), followed by parenthetical elaboration of return values, and closes with an operational constraint ('Efficiently batches up to 100 CIDs'). Every clause serves selection or invocation guidance.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given high schema coverage, presence of output schema, and rich annotations, the description achieves completeness by mapping parameter flags to real-world data categories (pharmacology, therapeutic use, FDA classes). It appropriately delegates return value specifics to the output schema while providing sufficient high-level orientation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with detailed field descriptions. Description adds semantic grouping (e.g., categorizing the 25+ property options into 'physicochemical properties', explaining that drug-likeness covers 'Lipinski/Veber rules'), which helps the agent understand intent and content boundaries beyond the mechanical schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with specific verb-resource 'Get detailed compound information by CID' and comprehensively enumerates return categories (physicochemical properties, textual description, synonyms, drug-likeness, pharmacological classification). The mention of 'batches up to 100 CIDs' clearly distinguishes this from single-lookup or search-oriented siblings.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context through detailed feature enumeration (e.g., specific properties like XLogP, TPSA, Lipinski rules), implicitly distinguishing it from siblings like get_compound_image or get_bioactivity. Lacks explicit 'when not to use' or prerequisite guidance (e.g., doesn't mention that CIDs must be obtained via search_compounds first), but the scope is precisely delineated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Adds valuable operational constraints beyond annotations: batch limit (up to 25), Hill notation standard for formulas, 2D Tanimoto similarity method, and hydration behavior. Annotations cover safety/idempotency; description adds domain-specific mechanics. Does not mention rate limits or pagination.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Efficiently structured with front-loaded purpose, scannable bullet points for five complex modes, and a final sentence on optional hydration. Every line conveys essential information; no redundancy with schema or annotations.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for complexity (10 parameters, conditional requirements, 5 modes). With full schema coverage and output schema present, description successfully conveys high-level search semantics and cross-tool workflow (hydration vs. details call) without duplicating structured data.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage (baseline 3), the description enriches understanding by providing concrete examples (e.g., 'C6H12O6' for Hill notation, 'batch up to 25') and explaining the 'properties' parameter's purpose (avoiding follow-up calls). Adds meaningful context beyond raw schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb ('Search') + resource ('PubChem chemical compounds') + clear scope (five distinct search modes). The detailed enumeration of identifier, formula, substructure, superstructure, and similarity modes effectively distinguishes this from sibling 'get_' retrieval tools and pubchem_search_assays.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context for when to use each search mode and hints at optimization via 'hydrate results... to avoid a follow-up details call,' implicitly guiding toward pubchem_get_compound_details. Lacks explicit contrast with pubchem_search_assays or explicit 'when not to use' guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge


Copy to your README.md:

Score Badge


Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.


How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5 across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
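The weighting described above can be sketched in Python. The weights and tier cutoffs are taken from the text; the exact rounding and aggregation Glama applies internally may differ, so treat this as an illustration of the formula rather than the scoring implementation:

```python
# Dimension weights from the description above (sum to 1.0).
DIM_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(DIM_WEIGHTS[d] * scores[d] for d in DIM_WEIGHTS)

def server_definition_quality(tool_scores: list) -> float:
    """60% mean TDQS + 40% minimum TDQS, so one weak tool drags the score."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

For example, the first tool's scores above (Purpose 5, Usage Guidelines 4, Behavior 3, Parameters 3, Conciseness 5, Completeness 4) work out to a TDQS of 4.0.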


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cyanheads/pubchem-mcp-server'
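The same lookup can be done with the Python standard library. The endpoint is the one shown in the curl command; the response fields are not documented on this page, so the JSON is handled generically:

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, repo: str) -> str:
    # Build the endpoint URL for a GitHub owner/repository pair.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    # Performs a live GET request and parses the JSON body.
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a live request):
# info = fetch_server("cyanheads", "pubchem-mcp-server")
# print(json.dumps(info, indent=2))
```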

If you have feedback or need assistance with the MCP directory API, please join our Discord server.