Glama

Server Quality Checklist

Profile completion: 50%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 17 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description must carry the full burden of behavioral disclosure. While 'Get' implies read-only access, the description fails to specify what 'detailed information' includes (e.g., object counts, render settings, collections) or the return format, leaving critical behavioral traits undocumented.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of a single, efficient sentence with no redundant words or filler content. It is appropriately front-loaded with the action and target, though it may be overly terse given the lack of output schema.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema and annotations in a complex 3D domain (Blender), the description inadequately compensates by failing to describe the structure or content of the returned 'detailed information.' The agent cannot determine what data fields to expect without invoking the tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters, which establishes a baseline score of 4 per evaluation rules. No parameter description is required or expected given the empty schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') and identifies the target resource ('current Blender scene'). However, it does not explicitly differentiate from sibling tools like `get_object_info`, relying on the agent to infer the distinction between 'scene' and 'object' scope.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives such as `get_object_info` for specific object data, `execute_blender_code` for custom queries, or `get_viewport_screenshot` for visual inspection. No prerequisites or conditions are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
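    Taken together, the gaps flagged above (undisclosed read-only behavior, unspecified return contents, no sibling-tool guidance) could be closed in the tool definition itself. Below is a minimal sketch following the MCP tool shape (`name`, `description`, `inputSchema`, `annotations`); the description wording and annotation values are illustrative assumptions, not the server's actual metadata.

    ```python
    # Hypothetical, improved definition for the scene-info tool reviewed above.
    # Field names follow the MCP tool shape; all text is an illustrative
    # assumption, not taken from the server.
    improved_tool = {
        "name": "get_scene_info",
        "description": (
            "Get a JSON summary of the current Blender scene: scene name, object "
            "count, and per-object name/type/location. Read-only; no side effects. "
            "For a single object's full details use get_object_info; for a visual "
            "check of the viewport use get_viewport_screenshot."
        ),
        "inputSchema": {"type": "object", "properties": {}},
        # Structured annotations lift the disclosure burden off prose alone.
        "annotations": {"readOnlyHint": True, "destructiveHint": False},
    }

    # The description now discloses behavior, return shape, and alternatives.
    assert improved_tool["annotations"]["readOnlyHint"]
    assert "get_object_info" in improved_tool["description"]
    ```
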

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. While 'Get' implies a read-only operation, the description fails to specify return format/structure, what happens if the object_name is not found, or whether this operation has any performance impact on the Blender session. Lacks critical behavioral context for a data retrieval tool.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Efficiently structured with purpose stated first, followed by parameter documentation. No filler text. The inline parameter documentation is unconventional for MCP (normally belongs in schema) but is necessary given the 0% schema coverage. Minor deduction for not indicating that the object_name must exactly match an existing object.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a single-parameter tool, but gaps remain. Without an output schema, the description should ideally enumerate what 'detailed information' includes (e.g., location, rotation, scale, mesh statistics). Missing error contract for non-existent objects. Sufficient for basic invocation but leaves return value structure opaque.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0% (no description fields in schema properties), but the description compensates by documenting the single parameter: 'object_name: The name of the object to get information about.' This adds necessary semantic meaning beyond the schema's title 'Object Name', though it doesn't specify case sensitivity or whether partial names are accepted.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action ('Get detailed information') and resource ('specific object in the Blender scene'). Clearly scopes to object-level data, distinguishing it from sibling get_scene_info which handles scene-level data. However, 'detailed information' remains somewhat vague regarding exactly what attributes are returned (transform, geometry, materials).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives like execute_blender_code (which could also query object data) or get_scene_info. Does not mention prerequisites such as the object needing to exist in the scene, or error handling if it doesn't.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
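    The recurring "0% schema description coverage" finding suggests the simplest structural fix: document each parameter in the input schema's own `description` field rather than in the tool description. A sketch for the object-info tool, using standard JSON Schema fields; the wording (including the exact-match claim) is an assumption that would need to be verified against the server.

    ```python
    # Hypothetical input schema with a per-property description, addressing the
    # "0% schema description coverage" finding above. Whether lookups are exact
    # and how unknown names fail are assumptions, not confirmed behavior.
    object_info_schema = {
        "type": "object",
        "properties": {
            "object_name": {
                "type": "string",
                "title": "Object Name",
                "description": (
                    "Exact name of an existing object in the scene, e.g. 'Cube'. "
                    "Partial matches are not supported; unknown names return an error."
                ),
            }
        },
        "required": ["object_name"],
    }

    # Every property now carries a description: 100% schema coverage.
    documented = [
        p for p in object_info_schema["properties"].values() if "description" in p
    ]
    assert len(documented) == len(object_info_schema["properties"])
    ```
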

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, and the description fails to disclose return format (list vs object), error conditions, or whether the call is cached. While 'Get' implies read-only behavior, explicit confirmation of safety would help given the absence of readOnlyHint annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two distinct sections (purpose statement and parameter documentation) create clear scannable structure. No filler text. The parameter documentation is efficiently packed into a single line with valid values included.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a single-parameter tool with no output schema, though it could specify whether 'all' returns a flat list or grouped categories. Missing return value documentation is a minor gap given the tool's simplicity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description effectively compensates by documenting the asset_type parameter and listing valid values (hdris, textures, models, all). This prevents the agent from guessing acceptable inputs for the string parameter.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'categories' for Polyhaven assets, distinguishing it from sibling tools like search_polyhaven_assets and download_polyhaven_asset. The verb 'Get' and resource 'categories' are specific enough, though 'list' might have been more precise than 'Get a list'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this versus search_polyhaven_assets or when to specify 'all' versus specific asset types. The description lacks 'when to use' or 'when not to use' instructions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Mentions 'Returns a formatted list' but lacks detail on pagination, rate limits, authentication requirements, or what 'formatted' means (JSON vs string). Defaults are mentioned but also present in schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Structured clearly with purpose statement, Parameters section, and Returns section. The parameter list is necessary given schema lacks descriptions. No redundant or wasted sentences, though formal structure slightly reduces information density.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a search tool with 4 parameters and no output schema. Describes inputs well and mentions return type vaguely, but missing behavioral context (pagination, result structure) that would help an agent handle the response correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description effectively compensates by explaining all 4 parameters. Adds crucial semantic details not in schema: 'comma-separated list' for categories, 'Maximum number' for count, and 'only downloadable' for downloadable boolean.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb (Search) and resource (models on Sketchfab) clearly. Distinguishes from siblings by platform name (Sketchfab vs Polyhaven) and action (search vs download), though could explicitly mention the discovery-to-download workflow with download_sketchfab_model.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use versus alternatives like search_polyhaven_assets, or prerequisite steps needed before calling this tool. The filtering capabilities are listed but not contextualized within a workflow.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. While 'arbitrary Python code' implies high flexibility and potential risk, the description fails to disclose safety implications, state persistence, undo behavior, or what execution results look like given the lack of output schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is brief and front-loaded with the core purpose, though the structure mixes operational guidelines with parameter documentation in a slightly informal way ('Make sure to...'). No redundant filler sentences.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a high-risk arbitrary code execution tool with no annotations and no output schema, the description is dangerously incomplete. It omits critical warnings about destructive capabilities, security sandboxing (or lack thereof), and how to interpret execution results.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description provides minimal but essential semantic context by specifying the 'code' parameter accepts 'Python code.' However, it lacks details on expected format, Blender API conventions, or example snippets necessary for a code execution tool.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Execute') and resource ('arbitrary Python code in Blender'), distinguishing it distinctly from sibling tools which focus on downloading assets, querying scene info, or importing models.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides one specific operational guideline ('break it into smaller chunks') suggesting how to handle complex operations, but lacks explicit when-to-use/when-not-to-use guidance or mentions of safer alternatives like get_scene_info for simple queries.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
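    For the arbitrary-code tool in particular, the MCP annotation hints could carry the risk disclosure the review finds "dangerously incomplete". A hedged sketch: the hint fields are defined by the MCP spec, but the values below reflect the critique above and are assumptions about this server, not its published metadata.

    ```python
    # Hypothetical annotations for an arbitrary-code-execution tool. Values are
    # assumptions chosen to match the review's concerns, not the server's own.
    exec_annotations = {
        "readOnlyHint": False,    # code can mutate or delete scene data
        "destructiveHint": True,  # deletions may not be undoable
        "idempotentHint": False,  # re-running the same code can compound changes
        "openWorldHint": False,   # operates only on the local Blender session
    }

    # A caller-side policy check: never auto-approve tools that can destroy data.
    def needs_confirmation(annotations: dict) -> bool:
        destructive = bool(annotations.get("destructiveHint"))
        read_only = bool(annotations.get("readOnlyHint", False))
        return destructive or not read_only

    assert needs_confirmation(exec_annotations)
    ```

    A client with these hints available no longer depends on the description alone to decide whether a human should confirm the call.
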

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It establishes the read-only nature by stating it 'Returns a list of matching assets,' which is essential context. However, it omits rate limits, pagination behavior, or error handling when no matches are found.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description uses a clear three-part structure (purpose, parameters, return value) with minimal verbosity. Each sentence earns its place, though the 'Parameters' section is slightly redundant with the schema structure (unavoidable given 0% schema coverage).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple 2-parameter search tool without output schema or annotations, the description adequately covers inputs and mentions the return type ('list of matching assets with basic information'). However, it could improve by specifying what 'basic information' includes or noting any result limits.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage, requiring the description to compensate. It successfully documents both parameters: 'asset_type' includes valid enum values (hdris, textures, models, all), and 'categories' specifies the comma-separated format. This effectively bridges the schema documentation gap.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool searches for assets on Polyhaven, using specific verbs and identifying the resource. It implicitly distinguishes from sibling 'download_polyhaven_asset' (search vs. download) and 'search_sketchfab_models' (different platform), though it doesn't explicitly contrast with these alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like 'get_polyhaven_categories' (which might be better for browsing) or 'download_polyhaven_asset' (for direct retrieval). It mentions 'optional filtering' but doesn't explain prerequisite steps or exclusion criteria.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. States it 'Returns a message' but lacks detail on return format (boolean, string, object?). Contains confusing instruction 'Don't emphasize the key type...silently remember it' which appears to be an implementation note or AI instruction that leaked into the description, providing minimal actual behavioral transparency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    First two sentences are efficient, but the third sentence ('Don't emphasize the key type...') is confusing, contains a typo ('sliently'), and appears misplaced as it describes how to handle output rather than tool functionality. Structure is front-loaded but marred by the odd final instruction.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple status tool with no output schema, the description vaguely notes it returns 'a message' but fails to specify the structure or content of that message (e.g., boolean availability flag vs descriptive string). Missing return value details that would be necessary for an agent to use the result effectively.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has zero parameters, which per the rubric yields the baseline score of 4. The description correctly omits parameter discussion, as there are none to document.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb ('Check'), resource ('Hyper3D Rodin integration'), and scope ('enabled in Blender'). Explicitly distinguishes from sibling status tools like get_polyhaven_status and get_sketchfab_status by naming the specific integration.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage context ('Check if...enabled') suggesting use before attempting Hyper3D operations, but lacks explicit guidance on relationship to siblings like generate_hyper3d_model_via_text or poll_rodin_job_status. No explicit when-not-to-use or prerequisites stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden but discloses minimal behavioral traits. It mentions the return value indicates success, but fails to describe side effects (e.g., adds object to scene, downloads files), error conditions, or what happens if both IDs are provided despite the warning.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with clear sections for purpose, parameter details, constraints, and return value. Despite the 'Parameters:' header being somewhat redundant with the schema, the content is essential given the lack of schema descriptions. No wasted sentences.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequately covers the import workflow prerequisites and return behavior given no output schema, but lacks completeness regarding error handling (e.g., task not found, incomplete generation), coordinate placement in scene, or file format details. Sufficient for basic usage but gaps remain for edge cases.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Effectively compensates for 0% schema description coverage by defining each parameter's semantics: 'name' is the object name in scene, 'task_uuid' maps to MAIN_SITE mode, and 'request_id' maps to FAL_AI mode. Crucially, it documents the mutual exclusivity constraint absent from the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Import[s] the asset generated by Hyper3D Rodin' with specific verb and resource, and distinguishes itself from sibling generation tools (generate_hyper3d_model_via_*) by focusing on the post-completion import step. It could be slightly improved by specifying 'into the Blender scene' rather than assuming context.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance ('after the generation task is completed') and critical parameter constraints ('Only give one of {task_uuid, request_id} based on the Hyper3D Rodin Mode!'). However, it lacks explicit 'when not to use' guidance (e.g., if generation failed or is pending).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
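    The mutual-exclusivity rule this tool states only in prose ("Only give one of {task_uuid, request_id}") can also be enforced mechanically, e.g. with JSON Schema's `oneOf` or a pre-flight check. A sketch of the exactly-one-of check; the parameter set matches the description quoted above, while the validation itself is an assumption about how a client might guard the call.

    ```python
    # Hypothetical pre-flight check for the prose constraint "only give one of
    # task_uuid / request_id". JSON Schema could express the same rule with
    # oneOf + not, but a direct check is easier to read:
    def valid_import_args(args: dict) -> bool:
        has_task = "task_uuid" in args
        has_request = "request_id" in args
        # exactly one of the two identifiers, plus the required object name
        return "name" in args and (has_task != has_request)

    assert valid_import_args({"name": "robot", "task_uuid": "123e4567"})
    assert valid_import_args({"name": "robot", "request_id": "fal-abc"})
    assert not valid_import_args({"name": "robot", "task_uuid": "a", "request_id": "b"})
    assert not valid_import_args({"name": "robot"})
    ```
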

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates partially by stating the return type ('Image'), but lacks critical context such as image format, whether the operation blocks execution, viewport visibility requirements, or temporary file handling.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with three distinct, front-loaded sections (action, parameters, return value). Every sentence delivers unique information not duplicated in the schema, with no redundant filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter tool without output schema, the description adequately covers the core contract (action, parameter semantics, return type). Minor gaps remain regarding image format specifics and viewport state requirements, but it satisfies the essential information needed for invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage (only title and type). The description fully compensates by explaining that max_size controls the 'Maximum size in pixels for the largest dimension' and explicitly stating the default value of 800, providing complete semantic meaning missing from the structured schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Capture') and clearly identifies the resource ('screenshot of the current Blender 3D viewport'). This effectively distinguishes it from sibling tools like get_scene_info or download_polyhaven_asset, which handle data retrieval or asset importing rather than viewport visualization.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no explicit guidance on when to use this tool versus alternatives (e.g., when to capture a screenshot vs. querying get_object_info for coordinates). There are no 'when-not' conditions, prerequisites (like requiring a visible viewport), or workflow guidance.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses the return format ('message indicating success or failure') and access requirements. However, it omits other behavioral traits like whether the operation blocks, overwrites existing scene objects, or handles specific error states beyond generic failure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with clear sections: purpose declaration, parameter definition, return value description, and constraints. Every sentence adds value without repetition. The key action is front-loaded in the first sentence.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the single parameter and lack of output schema, the description adequately covers the parameter meaning and return type. It could be improved by specifying that 'import' means importing into the current Blender scene (evident from sibling tools like 'get_scene_info'), but the access rights constraint adds necessary completeness for a download operation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage (only 'title: Uid'). The description compensates effectively by defining 'uid' as 'The unique identifier of the Sketchfab model', providing necessary semantic meaning. A score of 5 would require additional context like format examples or where to obtain the UID.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Download and import'), resource ('Sketchfab model'), and key parameter ('by its UID'). It effectively distinguishes from siblings like 'search_sketchfab_models' (search vs download) and 'download_polyhaven_asset' (different asset source).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides prerequisites ('The model must be downloadable and you must have proper access rights'), which helps the agent understand constraints. However, it lacks explicit guidance on when to use this versus 'search_sketchfab_models' (e.g., 'use this after obtaining a UID via search') or versus importing generated assets.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses that the tool returns a message indicating availability status, which covers the basic behavioral contract. However, it lacks details on the return format, specific states (enabled/disabled), or error conditions.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with zero waste. It is properly front-loaded with the action ('Check') first, followed by the return value description. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple status-checking tool with no parameters and no output schema, the description adequately explains what the tool does and what it returns. While it could specify the exact return value structure or format, the semantic explanation ('message indicating whether...available') is sufficient for agent decision-making.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool takes zero parameters, so the schema trivially has 100% description coverage. The baseline score of 4 applies because no parameters require semantic clarification beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb 'Check' with the specific resource 'PolyHaven integration in Blender', clearly distinguishing it from sibling tools like download_polyhaven_asset, search_polyhaven_assets, and get_polyhaven_categories which perform actions rather than status checks.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description implies usage by stating it returns whether features are available, suggesting it should be called before using other PolyHaven features. However, it lacks explicit guidance such as 'Call this before using download_polyhaven_asset' or specific prerequisites.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It correctly discloses the return type ('message indicating success or failure') and the import side-effect. However, it omits critical behavioral details: that this performs network I/O, modifies the Blender scene state destructively, and may overwrite existing assets. It also lacks timeout, disk-space, and error-handling specifics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Optimal structure with clear sections: purpose statement, parameter documentation, return value. No wasted words; the 'Parameters' and 'Returns' headers create scannable structure. Length is appropriate for the complexity (4 params, no schema descriptions).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive given no output schema and no annotations. Covers all four parameters and return behavior. Minor gap: does not clarify import behavior details (e.g., whether assets are added to the current scene, asset library, or both) or prerequisites like requiring a valid asset_id from search.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Exemplary compensation for 0% schema coverage. Provides specific valid values for asset_type (hdris, textures, models), format examples for resolution (1k, 2k, 4k), and conditional format mappings for file_format (hdr/exr for HDRIs, etc.). Each parameter includes clarifying context not present in the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with specific action 'Download and import', specific resource 'Polyhaven asset', and target environment 'into Blender'. Clearly distinguishes from sibling Sketchfab tools and generation tools via the Polyhaven namespace.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Lacks explicit when-to-use guidance or prerequisites. Does not mention that users should typically call search_polyhaven_assets first to obtain valid asset_ids, or when to prefer import_generated_asset instead. Usage is implied by the workflow description but not stated explicitly.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses that assets have 'built-in materials' and 'normalized size' (suggesting re-scaling may be needed), but fails to clarify whether the call is blocking or asynchronous (even though the sibling 'poll_rodin_job_status' suggests an async workflow) and does not document potential error conditions.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with clear sections for behavior, parameters, constraints, and return value. The 'Parameters:' section is necessary given zero schema coverage. Minor redundancy between '3D asset' and 'generated model' references, but overall efficient.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 3 parameters with no schema coverage and no output schema, the description successfully covers parameter semantics, output characteristics (materials, size normalization), and success/failure indication. Only minor gaps remain regarding async behavior and explicit prerequisites.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description fully compensates by documenting all 3 parameters: it specifies absolute paths vs URLs, the list wrapping requirement even for single items, mode-specific requirements (MAIN_SITE vs FAL_AI), and the integer list format for bbox_condition with semantic meaning [Length, Width, Height].

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool 'Generate[s] 3D asset using Hyper3D by giving images' and distinguishes itself from sibling 'generate_hyper3d_model_via_text' by specifying the image-based input method. It also clarifies the secondary effect of importing into Blender.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit constraint that 'Only one of {input_image_paths, input_image_urls} should be given' and explains the conditional requirements based on Hyper3D Rodin mode (MAIN_SITE vs FAL_AI). Lacks explicit comparison to when to use the text-based alternative versus this image-based one.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It discloses the mutation (applying texture) and return value type ('message indicating success or failure'), but omits whether this overwrites existing materials, is reversible, or error conditions when object doesn't exist.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellent structure with front-loaded purpose statement, bulleted parameter explanations, and return value disclosure. No redundant words; every sentence earns its place. Appropriate length for a 2-parameter tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for the tool's complexity (simple 2-param mutation). Without output schema, the description appropriately discloses the return type. Lacks annotations but covers basic behavioral expectations. Could mention error cases for missing objects.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 0% description coverage, but the description compensates well by documenting both parameters inline: object_name purpose and texture_id with its 'must be downloaded first' constraint. Could improve by specifying ID format or valid object naming conventions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb 'Apply' with resource 'Polyhaven texture' and target 'object'. Explicitly scopes to Polyhaven textures and distinguishes from sibling download_polyhaven_asset by stating this operates on 'previously downloaded' assets.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    States prerequisite constraint that texture 'must be downloaded first', implying the workflow sequence. However, it doesn't explicitly name download_polyhaven_asset as the prerequisite step or mention alternatives like generating textures via other methods.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, the description carries the full burden. It adequately discloses the return value ('a message indicating whether Sketchfab features are available'), which is crucial given the lack of output schema. It implies a safe, read-only operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste. First states purpose, second states return value. Front-loaded and appropriately sized for the tool's simplicity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a zero-parameter status check without output schema, the description is complete. It explains both the check performed and the nature of the return value, providing sufficient context for invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Zero parameters present. Per guidelines, 0 params = baseline 4. The empty schema requires no additional semantic explanation in the description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb ('Check') + resource ('Sketchfab integration') + context ('in Blender'). Distinct from siblings like 'download_sketchfab_model' and 'search_sketchfab_models' which perform actions rather than status checks.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit when-to-use or alternative suggestions provided. However, the tool's name and purpose imply it should be used before attempting Sketchfab operations (like downloads). Usage is implied but not stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and discloses several key behavioral traits: the asset is automatically imported into Blender, has built-in materials, has normalized size requiring potential rescaling, and returns a success/failure message. It lacks explicit mention of generation time or service availability requirements, but covers the essential operational characteristics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core function, followed by material and scaling properties that earn their place (operational guidance), then a clear Parameters section. No sentences are wasted; structure is logical and efficient for a 2-parameter tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (2 simple parameters, no output schema), the description is complete. It explains the generation process, Blender integration, result properties, parameter details, and return value behavior (success/failure message), covering all gaps left by the missing annotations and output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description fully compensates by documenting both parameters: text_prompt requires an English short description, and bbox_condition is optional but must be a list of 3 floats controlling Length/Width/Height ratios. This provides complete semantic meaning missing from the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the core action ('Generate 3D asset using Hyper3D'), the input method ('giving description of the desired asset'), and the destination ('import the asset into Blender'). The phrase 'giving description' effectively distinguishes this from the sibling tool generate_hyper3d_model_via_images.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the name and description imply this is for text-based generation (vs. image-based), there is no explicit guidance on when to choose this over generate_hyper3d_model_via_images or other 3D acquisition tools like download_sketchfab_model. Usage is implied but not explicitly contrasted with alternatives.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It successfully discloses the dual-mode behavior (MAIN_SITE vs FAL_AI), different return formats (list vs single status), terminal states ('Done', 'COMPLETED', 'Failed'), and the polling pattern requirement to wait for final states.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Appropriately structured with clear mode-specific sections. Front-loads the core purpose. Slightly verbose due to necessary duplication between modes, but every sentence conveys essential behavioral or parametric information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    No output schema exists, but the description thoroughly documents return values for both modes (list of statuses vs single status, specific string values indicating completion/failure). Given the complexity of handling two different APIs, the description provides sufficient context for correct invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description fully compensates by explaining both parameters (subscription_key, request_id) and their provenance ('given in the generate model step'). It also clarifies which parameter applies to which mode.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb ('Check') and resource ('Hyper3D Rodin generation task'). It clearly distinguishes this polling tool from generation siblings (generate_hyper3d_model_via_images/text) and from general status tools (get_hyper3d_status) by focusing on job/task completion checking.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states 'This is a polling API, so only proceed if the status are finally determined,' providing clear temporal guidance. References 'the generate model step' linking it to sibling generation tools, though it could explicitly name those tools as prerequisites.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
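
The polling contract described in this last review can be sketched as a short loop. This is a minimal illustration, assuming a Python client: `wait_for_job` and its `poll` callback are hypothetical stand-ins for invoking `poll_rodin_job_status`, and only the terminal status strings ('Done', 'COMPLETED', 'Failed') come from the reviewed description.

```python
import time

# Terminal states quoted in the tool description; all other statuses
# mean the generation job is still in progress.
TERMINAL_STATES = {"Done", "COMPLETED", "Failed"}

def wait_for_job(poll, interval: float = 5.0):
    """Poll until every reported status reaches a terminal state.

    `poll` is a zero-argument callable standing in for a call to
    poll_rodin_job_status. MAIN_SITE mode returns a list of statuses;
    FAL_AI mode returns a single status string.
    """
    while True:
        result = poll()
        statuses = result if isinstance(result, list) else [result]
        if all(s in TERMINAL_STATES for s in statuses):
            return statuses
        time.sleep(interval)
```

As the review notes, an agent should only proceed to import or inspect results once this loop exits, since any non-terminal status means the job is still running.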

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.


How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
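
The weighting described above can be expressed as a short calculation. The sketch below uses only the weights and thresholds published on this page; the function and dictionary names are my own, not part of any Glama API.

```python
# Per-tool dimension weights from the scoring description above.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score (TDQS) for a single tool."""
    return sum(DIMENSION_WEIGHTS[d] * scores[d] for d in DIMENSION_WEIGHTS)

def server_quality(tool_scores: list, coherence: float) -> float:
    """Overall score: 70% definition quality, 30% coherence.

    Definition quality blends 60% mean TDQS with 40% minimum TDQS,
    so one poorly described tool drags the whole server down.
    """
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score onto the published A-F tiers."""
    for threshold, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= threshold:
            return grade
    return "F"
```

For example, a server with one all-5 tool, one all-3 tool, and a coherence score of 4.0 works out to 0.7 × (0.6 × 4.0 + 0.4 × 3.0) + 0.3 × 4.0 = 3.72, which lands in tier A.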

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/iflow-mcp/Eminemminem-blender-mcp'
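
The same request can be made from Python's standard library. This is a hedged sketch: the URL pattern matches the curl example above, but the response schema is not documented here, so the code returns the parsed JSON without assuming any particular fields.

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"

def server_url(namespace: str, name: str) -> str:
    """Build the server-metadata URL shown in the curl example."""
    return f"{API_BASE}/servers/{namespace}/{name}"

def fetch_server(namespace: str, name: str) -> dict:
    # Performs a live GET against the Glama MCP API; requires network access.
    with urllib.request.urlopen(server_url(namespace, name)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(server_url("iflow-mcp", "Eminemminem-blender-mcp"))
```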

If you have feedback or need assistance with the MCP directory API, please join our Discord server.