Glama
moldsim
by moldsim

Server Quality Checklist

67%
Profile completion: A complete profile improves this server's visibility in search results.
  • Disambiguation 5/5

    Each tool serves a distinct purpose with clear boundaries: material lookup vs. comparison, DFM checklist generation vs. simulation spec generation, knowledge querying, and process validation. No functional overlap exists.

    Naming Consistency 5/5

    All six tools follow a consistent verb_noun snake_case pattern (compare_materials, generate_dfm_checklist, get_material_properties, etc.). Verb choices clearly indicate the action type (get vs. generate vs. validate vs. query).

    Tool Count 5/5

    Six tools is an ideal count for this domain, covering the complete pre-simulation workflow: material selection, DFM validation, simulation setup, process validation, and knowledge support without bloat or redundancy.

    Completeness 4/5

    Excellent coverage of injection molding preparation workflows including material data, DFM checks, and process validation. Minor gap in actual simulation execution or results analysis tools, though this may be out of scope for a specification/preparation server.

  • Average 4/5 across 6 of 6 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.4.3

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 6 tools.
  • No known security issues or vulnerabilities reported.


  • Are you the author? Claim this server by authenticating with GitHub.

  • Add related servers to improve discoverability.

Tool Scores

  • Simulation spec generation tool

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses output content ('analysis types, process conditions, mesh recommendations, and expected results'), but fails to indicate whether this operation is read-only, destructive, idempotent, or has side effects like persistence to a database.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with zero waste. The first sentence establishes the core function and input method; the second sentence details the output components. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the 100% schema coverage and lack of output schema, the description adequately covers the tool's purpose and output contents. However, for a generation tool with no annotations, it should ideally disclose the output format (JSON vs string) and safety characteristics, which are currently absent.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'natural language description of the part and requirements,' which reinforces the required 'description' parameter, but adds no additional semantic detail for the optional 'cad_format' parameter or guidance on input syntax beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool generates a 'structured simulation specification' from 'natural language description,' specifying the transformation performed. It distinguishes from siblings like 'validate_process_parameters' (which checks existing values) and 'query_simulation_knowledge' (which retrieves data) by emphasizing the generative nature and natural language input requirement, though it doesn't explicitly contrast with the sibling 'generate_dfm_checklist'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage through the phrase 'from a natural language description,' indicating when to use this tool (when unstructured text input is available). However, it lacks explicit guidance on when NOT to use it or which sibling to use instead (e.g., 'use validate_process_parameters if you already have specific parameters to check').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • compare_materials

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses the output format ('table of processing, thermal, and mechanical properties with key differences highlighted') and enumerates all valid material IDs. However, it lacks disclosure of error handling, rate limits, or read-only status.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three well-structured sentences: purpose/action first, return value second, valid inputs third. The material list is lengthy but necessary given the lack of schema enums. Every clause provides actionable information with no filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter tool with no output schema, the description adequately covers the return value structure and valid input domain. It misses explicit error handling documentation, but provides sufficient context for successful invocation given the tool's limited complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    While the schema has 100% coverage and describes the parameter structure, the description adds crucial semantic value by enumerating all 20 valid material ID strings (abs-generic, pp-homo, etc.) that the schema does not provide as enums. This compensates for the schema's lack of constraint enumeration.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Compare'), resource ('materials'), and scope ('2-4 materials side-by-side'). It distinguishes from sibling get_material_properties by emphasizing the comparative aspect and multi-material support (2-4 items).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The '2-4 materials' constraint implies usage boundaries, but there is no explicit guidance on when to use this versus get_material_properties (for single material lookup) or other siblings. The agent must infer the comparison use case from the verb alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • generate_dfm_checklist

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and successfully discloses the return format (pass/warn/fail status) and evaluation scope (15+ design rules), though it omits error handling behavior and side effects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences efficiently communicate purpose and return behavior without redundancy. Every word earns its place, with no filler content.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of output schema, the description appropriately discloses the return structure. With 100% schema coverage handling inputs, the description is nearly complete, though noting that only 'description' is required while other parameters are optional would improve it further.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description references 'part description and parameters' generically without adding semantic meaning, relationships between parameters (e.g., rib-to-wall thickness ratios), or input examples beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('Generate'), clear resource ('Design for Manufacturability checklist'), and domain ('injection molded part'), effectively distinguishing it from siblings like validate_process_parameters and compare_materials.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the domain specificity ('injection molded part') provides implicit context, the description lacks explicit when-to-use guidance or differentiation from siblings like validate_process_parameters that also operate in the manufacturing domain.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • query_simulation_knowledge

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It discloses the scope of the knowledge base (troubleshooting, DFM rules, mesh guidelines, process parameters), but omits behavioral details like return format, result count limits, or whether results include citations/source references.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences efficiently structured: (1) core action, (2) coverage scope, (3) usage context. Every sentence earns its place with no redundancy or generic filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of annotations and output schema, the description adequately compensates by detailing the knowledge domains covered. It could be improved by noting the return type (e.g., articles, troubleshooting steps), but provides sufficient context for a search-oriented tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage ('Your question...' and 'Additional context...'), establishing baseline 3. The description mentions the types of questions covered but does not add parameter-specific semantics, examples, or format constraints beyond the schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with the specific action 'Search' and resource 'injection molding simulation knowledge base.' It clearly distinguishes from siblings by positioning itself as a general knowledge retrieval tool versus specific operations like 'compare,' 'generate,' 'get,' or 'validate' offered by sibling tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit usage guidance with 'Use for general questions about molding, simulation setup, or problem-solving,' indicating when to select this tool over more specialized siblings. However, it lacks explicit 'when not to use' exclusions or named alternatives.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_material_properties

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates well by detailing exactly what data is returned (Cross-WLF viscosity model, 2-domain Tait PVT, thermal/mechanical data) and listing the constrained set of available materials, effectively communicating scope and output format.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three dense sentences with zero waste: sentence 1 states purpose, sentence 2 details return values, sentence 3 lists valid inputs. Well front-loaded with the action verb and appropriately sized for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the rich input schema (100% coverage) and absence of output schema, the description adequately compensates by describing return data types (Cross-WLF, PVT models) and valid material inputs. Only missing safety/performance annotations prevent a 5.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema coverage (baseline 3), the description adds significant value by enumerating the specific available materials (abs-generic, pp-homo, etc.), which acts as implicit enum documentation beyond the schema's generic 'Material name, family, or ID' description. This concrete list helps agents select valid inputs.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verb 'Look up' with clear resource 'material properties' and domain context 'injection molding simulation'. It effectively distinguishes from siblings like compare_materials (which implies multiple materials) and generate_dfm_checklist (which is procedural rather than data retrieval).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description provides specific domain context that implies when to use it (when you need injection molding property data), it lacks explicit guidance on when to prefer sibling tools like compare_materials or validate_process_parameters. No explicit exclusions or alternatives are named.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • validate_process_parameters

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It adds valuable behavioral context that the tool 'flags out-of-range values with suggestions' and performs calculations (shear rate estimation, cooling time). However, it omits whether this is read-only (likely yes), if there are rate limits, or specific error behaviors.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two tightly constructed sentences with zero waste. Front-loaded with the core purpose (validation against processing window), followed by specific actions and output format (flags with suggestions). Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 7 parameters with 100% schema coverage and no output schema, the description adequately covers the input semantics and explains the conceptual output (flags, suggestions). It could be improved by describing the return structure (e.g., whether it returns a report object or boolean with warnings), but provides sufficient context for an agent to invoke the tool correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage, baseline is 3. The description adds semantic value by mapping parameters to validation logic: melt/mold temperature checks, shear rate estimation (implied from injection speed), and cooling time calculations. This helps the agent understand why each parameter matters to the validation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'Validate' with clear resource 'injection molding process parameters' and scope 'against material processing window'. It clearly distinguishes from siblings (compare_materials, generate_dfm_checklist, etc.) by specifying this is a validation operation, not comparison, generation, or retrieval.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context that this is for validating injection molding parameters, implying use during process setup or quality checks. However, it lacks explicit 'when to use' guidance or named alternatives for scenarios like material selection (where compare_materials would be more appropriate).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

moldsim-mcp MCP server

Copy to your README.md:

Score Badge

moldsim-mcp MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
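The weighting described above can be sketched as a small calculation. This is a reconstruction from the published percentages only; any rounding or normalization details in Glama's actual implementation are assumptions.

```python
# Dimension weights for Tool Definition Quality (TDQS), per the description above.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool."""
    return sum(scores[d] * w for d, w in DIMENSION_WEIGHTS.items())

def definition_quality(per_tool: list) -> float:
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
    so one poorly described tool pulls the score down."""
    tdqs = [tool_tdqs(s) for s in per_tool]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(per_tool: list, coherence: list) -> float:
    """Overall: 70% Tool Definition Quality + 30% Server Coherence,
    where coherence averages four equally weighted dimensions."""
    return 0.7 * definition_quality(per_tool) + 0.3 * (sum(coherence) / len(coherence))

def tier(score: float) -> str:
    """Map an overall score to a letter tier."""
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"
```

For example, a server whose only tool scores 4 on every dimension, with coherence scores of 5, 5, 5, and 4, would land at 0.7 × 4.0 + 0.3 × 4.75 = 4.225, tier A.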


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/moldsim/moldsim-mcp'
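The same endpoint can be called from Python. The GET URL below is taken from the curl example above; the shape of the JSON response is not documented on this page, so inspect the parsed dict before relying on any field names.

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(namespace: str, slug: str) -> str:
    """Build the directory API URL for a server, matching the curl example."""
    return f"{API_BASE}/{namespace}/{slug}"

def fetch_server(namespace: str, slug: str) -> dict:
    """Fetch a server's metadata and parse the JSON body."""
    with urllib.request.urlopen(server_url(namespace, slug)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # e.g. fetch_server("moldsim", "moldsim-mcp")
    print(server_url("moldsim", "moldsim-mcp"))
```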

If you have feedback or need assistance with the MCP directory API, please join our Discord server.