get_user_view

Capture the current Blender viewport as an image to show what the user is seeing in the 3D workspace.

Instructions

Capture and return the current Blender viewport as an image.
Shows what the user is currently seeing in Blender.

Focus mostly on the 3D viewport. Use the UI to assist in your understanding of the scene but only refer to it if specifically prompted.

Args:
    max_dimension: Maximum dimension (width or height) of the returned image, in pixels
    compression_quality: Image compression quality (1-100; higher values mean better quality but a larger image)

Returns:
    An image of the current Blender viewport
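
Because the tool takes no arguments, a client invokes it by name with an empty arguments object. A minimal sketch of the JSON-RPC `tools/call` request an MCP client would send (the `id` value is arbitrary):

```python
import json

# MCP tool invocations travel as JSON-RPC 2.0 "tools/call" requests;
# for get_user_view the arguments object is simply empty.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_user_view", "arguments": {}},
}

print(json.dumps(request))
```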

Input Schema


No arguments
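
A zero-argument tool like this one typically advertises an empty object schema. A sketch of the assumed shape (not captured from the live server):

```python
# Sketch of the input schema a zero-argument MCP tool typically advertises
# (assumed shape; not copied from the server's actual output).
input_schema = {
    "type": "object",
    "properties": {},
    "required": [],
}

# The matching arguments payload is just an empty object.
arguments = {}
```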

Implementation Reference

  • The handler function implementing the 'get_user_view' MCP tool. It connects to Blender, requests the current viewport image via 'get_current_view' command, decodes the base64 image data, optionally resizes and compresses it using PIL for efficiency, and returns an MCP Image object.
    def get_user_view() -> Image:
        """
        Capture and return the current Blender viewport as an image.
        Shows what the user is currently seeing in Blender.
    
        Focus mostly on the 3D viewport. Use the UI to assist in your understanding of the scene but only refer to it if specifically prompted.
        
        Args:
            max_dimension: Maximum dimension (width or height) of the returned image, in pixels
            compression_quality: Image compression quality (1-100; higher values mean better quality but a larger image)
        
        Returns:
            An image of the current Blender viewport
        """
        # Fixed internally; despite the Args section above, the tool's input
        # schema exposes no call-time parameters.
        max_dimension = 800
        compression_quality = 85
    
        # PIL resizes/compresses the image; io wraps the bytes for PIL.
        # (Image and get_blender_connection are assumed to be module-level
        # imports elsewhere in the server file.)
        import base64
        import io

        from PIL import Image as PILImage
    
        try:
            # Get the global connection
            blender = get_blender_connection()
            
            # Request current view
            result = blender.send_command("get_current_view")
            
            if "error" in result:
                # logger.error(f"Error getting view from Blender: {result.get('error')}")
                raise Exception(f"Error getting current view: {result.get('error')}")
            
            # Extract image information
            if "data" not in result or "width" not in result or "height" not in result:
                # logger.error("Incomplete image data returned from Blender")
                raise Exception("Incomplete image data returned from Blender")
            
            # Decode the base64 image data
            image_data = base64.b64decode(result["data"])
            original_width = result["width"]
            original_height = result["height"]
            original_format = result.get("format", "png")
            
            # Compression is only needed if the image is large
            if original_width > max_dimension or original_height > max_dimension or len(image_data) > 1_000_000:
                # logger.info(f"Compressing image (original size: {len(image_data)} bytes)")
                
                # Open image from binary data
                img = PILImage.open(io.BytesIO(image_data))
                
                # Resize if needed
                if original_width > max_dimension or original_height > max_dimension:
                    # Calculate new dimensions maintaining aspect ratio
                    if original_width > original_height:
                        new_width = max_dimension
                        new_height = int(original_height * (max_dimension / original_width))
                    else:
                        new_height = max_dimension
                        new_width = int(original_width * (max_dimension / original_height))
                    
                    # Resize using high-quality resampling
                    img = img.resize((new_width, new_height), PILImage.Resampling.LANCZOS)
                
                # Convert to RGB if needed
                if img.mode == 'RGBA':
                    img = img.convert('RGB')
                
                # Save as JPEG with compression
                output = io.BytesIO()
                img.save(output, format='JPEG', quality=compression_quality, optimize=True)
                compressed_data = output.getvalue()
    
                # logger.info(f"Image compressed from {len(image_data)} to {len(compressed_data)} bytes")
                
                # Return compressed image
                return Image(data=compressed_data, format="jpeg")
            else:
                # Image is small enough, return as-is
                return Image(data=image_data, format=original_format)
                
        except Exception as e:
            # logger.error(f"Error processing viewport image: {str(e)}")
            raise Exception(f"Error processing viewport image: {str(e)}") from e
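
The aspect-ratio arithmetic in the resize branch above can be checked in isolation. A small sketch (the `fit_within` helper name is hypothetical, not part of the server):

```python
def fit_within(width: int, height: int, max_dimension: int) -> tuple[int, int]:
    """Scale (width, height) to fit inside max_dimension, keeping aspect ratio."""
    if width <= max_dimension and height <= max_dimension:
        return width, height  # already small enough; no resize needed
    if width > height:
        # Landscape: clamp width, scale height proportionally
        return max_dimension, int(height * (max_dimension / width))
    # Portrait (or square): clamp height, scale width proportionally
    return int(width * (max_dimension / height)), max_dimension

print(fit_within(1600, 900, 800))  # → (800, 450)
print(fit_within(900, 1600, 800))  # → (450, 800)
print(fit_within(600, 400, 800))   # → (600, 400), unchanged
```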
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool captures the viewport as an image and mentions focusing on the 3D viewport with UI assistance only when prompted. However, it doesn't describe error conditions, performance characteristics, or what happens if Blender isn't in a viewport state.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with purpose statement, usage context, parameter details, and return value. Every sentence adds value: the first states the core function, the second clarifies scope, the third provides UI guidance, and the parameter/return sections document behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations, no output schema, and 0 parameters in the schema, the description does well by explaining what it captures, how to use it, and documenting two important parameters. It could be more complete by describing the image format or error cases, but it covers the essentials adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so the baseline is 4. The description includes an 'Args' section with two parameters (max_dimension, compression_quality) that aren't in the schema, adding value beyond the structured data by documenting these optional parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Capture and return the current Blender viewport as an image' with specific verb ('capture and return') and resource ('current Blender viewport'). It distinguishes from siblings by focusing on the visual viewport rather than IFC data, code execution, or exports.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context: 'Shows what the user is currently seeing in Blender' and 'Focus mostly on the 3D viewport.' It gives guidance on when to use it (for viewport capture) but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
