stage_export_scene

Export 3D scenes from the MCP server to React Three Fiber components, Remotion projects, glTF files, or JSON data for rendering and integration.

Instructions

Export scene to R3F, Remotion, or glTF format.

Converts the scene definition into code/files that can be used
with React Three Fiber or Remotion for rendering.

Args:
    scene_id: Scene identifier
    format: Export format - "r3f-component", "remotion-project", "gltf", "json"
    output_path: Optional VFS path for output (auto-generated if None)

Returns:
    ExportSceneResponse with output paths

Tips for LLMs:
    - "r3f-component": Generate React Three Fiber .tsx files
    - "remotion-project": Full Remotion project with package.json
    - "gltf": Static 3D scene file
    - "json": Raw scene JSON data
    - Exported files are in the scene's VFS workspace
    - Use chuk-artifacts to retrieve exported files

Example:
    result = await stage_export_scene(
        scene_id=scene_id,
        format="remotion-project"
    )
    print(f"Exported to {result.output_path}")
    # Files available at result.artifacts paths

Input Schema

Name         Required  Description                                   Default
scene_id     Yes       Scene identifier                              -
format       No        Export format (one of the four listed above)  r3f-component
output_path  No        VFS output path (auto-generated if omitted)   -

Implementation Reference

  • Main handler function stage_export_scene decorated with @tool and @requires_auth(). Handles exporting scenes to different formats (R3F, Remotion, glTF, JSON) using SceneExporter.
    @requires_auth()
    @tool  # type: ignore[arg-type]
    async def stage_export_scene(
        scene_id: str, format: str = "r3f-component", output_path: Optional[str] = None
    ) -> ExportSceneResponse:
        """Export scene to R3F, Remotion, or glTF format.
    
        Converts the scene definition into code/files that can be used
        with React Three Fiber or Remotion for rendering.
    
        Args:
            scene_id: Scene identifier
            format: Export format - "r3f-component", "remotion-project", "gltf", "json"
            output_path: Optional VFS path for output (auto-generated if None)
    
        Returns:
            ExportSceneResponse with output paths
    
        Tips for LLMs:
            - "r3f-component": Generate React Three Fiber .tsx files
            - "remotion-project": Full Remotion project with package.json
            - "gltf": Static 3D scene file
            - "json": Raw scene JSON data
            - Exported files are in the scene's VFS workspace
            - Use chuk-artifacts to retrieve exported files
    
        Example:
            result = await stage_export_scene(
                scene_id=scene_id,
                format="remotion-project"
            )
            print(f"Exported to {result.output_path}")
            # Files available at result.artifacts paths
        """
        manager = get_scene_manager()
        scene = await manager.get_scene(scene_id)
        vfs = await manager.get_scene_vfs(scene_id)
    
        export_format = ExportFormat(format)
    
        # Use exporter
        artifacts = await SceneExporter.export_scene(scene, export_format, vfs, output_path)
    
        # Determine main output path
        main_path = artifacts.get("scene") or artifacts.get("component") or artifacts.get("gltf") or "/"
    
        return ExportSceneResponse(
            scene_id=scene_id, format=export_format, output_path=main_path, artifacts=artifacts
        )
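The main-output-path selection in the handler is a simple fallback chain over the artifacts dict. It can be exercised in isolation; a minimal sketch mirroring the logic above (the artifact keys "scene", "component", and "gltf" are the ones the handler checks):

```python
def pick_main_path(artifacts: dict[str, str]) -> str:
    """Prefer "scene", then "component", then "gltf"; default to VFS root."""
    return (
        artifacts.get("scene")
        or artifacts.get("component")
        or artifacts.get("gltf")
        or "/"
    )

print(pick_main_path({"component": "/exports/Scene.tsx"}))  # → /exports/Scene.tsx
print(pick_main_path({}))                                   # → /
```

Note that `or` skips empty-string values as well as missing keys, so an artifact recorded with an empty path falls through to the next candidate.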
  • SceneExporter.export_scene static method that routes to format-specific export methods (_export_json, _export_r3f, _export_remotion, _export_gltf) based on the ExportFormat enum.
  • ExportSceneResponse schema defining the response structure with scene_id, format (ExportFormat enum), output_path, and artifacts dict.
    class ExportSceneResponse(BaseModel):
        """Response from exporting scene."""
    
        scene_id: str
        format: ExportFormat
        output_path: str  # VFS path to exported content
        artifacts: dict[str, str] = Field(
            default_factory=dict, description="Additional generated files"
        )
        message: str = "Scene exported successfully"
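The response shape can be approximated with a stdlib dataclass (the real model is a pydantic BaseModel; the example values below are hypothetical). As with pydantic's `Field(default_factory=dict)`, the mutable `artifacts` default needs a factory so instances don't share one dict:

```python
from dataclasses import dataclass, field

@dataclass
class ExportSceneResponse:
    scene_id: str
    format: str
    output_path: str
    artifacts: dict[str, str] = field(default_factory=dict)
    message: str = "Scene exported successfully"

resp = ExportSceneResponse(
    scene_id="scene-1",
    format="remotion-project",
    output_path="/exports/scene-1",
    artifacts={"scene": "/exports/scene-1/src/Scene.tsx"},
)
print(resp.message)  # → Scene exported successfully
```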
  • ExportFormat enum defining the supported export formats: R3F_COMPONENT, REMOTION_PROJECT, GLTF, and JSON.
    class ExportFormat(str, Enum):
        """Export template formats."""
    
        R3F_COMPONENT = "r3f-component"  # React Three Fiber component
        REMOTION_PROJECT = "remotion-project"  # Full Remotion project
        GLTF = "gltf"  # Static glTF scene
        JSON = "json"  # Raw JSON scene data
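Because ExportFormat subclasses str, the handler's `ExportFormat(format)` call looks members up by value, compares equal to plain strings, and raises ValueError for unknown formats. A small stdlib sketch of that behavior:

```python
from enum import Enum

class ExportFormat(str, Enum):
    R3F_COMPONENT = "r3f-component"
    REMOTION_PROJECT = "remotion-project"
    GLTF = "gltf"
    JSON = "json"

fmt = ExportFormat("gltf")           # lookup by value
print(fmt is ExportFormat.GLTF)      # → True
print(fmt == "gltf")                 # → True (str subclass compares equal)

try:
    ExportFormat("mp4")              # not a member value
except ValueError:
    print("invalid format rejected")
```

This is why passing an unsupported format string fails fast at the top of the handler, before any export work begins.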
  • SceneExporter class with helper methods for exporting scenes including _ensure_directory utility and format-specific export implementations.
    class SceneExporter:
        """Exports scenes to various formats."""
    
        @staticmethod
        async def _ensure_directory(vfs, path: str) -> None:
            """Ensure directory exists by creating all parent directories.
    
            Args:
                vfs: VFS instance
                path: Directory path to create
            """
            if path == "/" or not path:
                return
    
            parts = [p for p in path.split("/") if p]
            current = ""
            for part in parts:
                current = f"{current}/{part}"
                await vfs.mkdir(current)
    
        @staticmethod
        async def export_scene(
            scene: Scene,
            format: ExportFormat,
            vfs,
            output_path: Optional[str] = None,
        ) -> dict[str, str]:
            """Export scene to specified format.
    
            Args:
                scene: Scene to export
                format: Export format
                vfs: VFS instance for writing files
                output_path: Optional output path override
    
            Returns:
                Dict of generated file paths
            """
            if format == ExportFormat.JSON:
                return await SceneExporter._export_json(scene, vfs, output_path)
            elif format == ExportFormat.R3F_COMPONENT:
                return await SceneExporter._export_r3f(scene, vfs, output_path)
            elif format == ExportFormat.REMOTION_PROJECT:
                return await SceneExporter._export_remotion(scene, vfs, output_path)
            elif format == ExportFormat.GLTF:
                return await SceneExporter._export_gltf(scene, vfs, output_path)
            else:
                raise ValueError(f"Unsupported export format: {format}")
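The `_ensure_directory` helper above walks the path segments and mkdirs each prefix in turn. A sketch of the same prefix-walking logic, using a hypothetical in-memory stand-in for the VFS (the real `vfs.mkdir` is also async, but its behavior beyond creating a directory is not shown in the source):

```python
import asyncio

class InMemoryVFS:
    """Hypothetical stand-in that records mkdir calls."""
    def __init__(self) -> None:
        self.dirs: list[str] = []

    async def mkdir(self, path: str) -> None:
        if path not in self.dirs:
            self.dirs.append(path)

async def ensure_directory(vfs, path: str) -> None:
    # Same prefix-walking logic as SceneExporter._ensure_directory
    if path == "/" or not path:
        return
    parts = [p for p in path.split("/") if p]
    current = ""
    for part in parts:
        current = f"{current}/{part}"
        await vfs.mkdir(current)

vfs = InMemoryVFS()
asyncio.run(ensure_directory(vfs, "/exports/scene-1/src"))
print(vfs.dirs)  # → ['/exports', '/exports/scene-1', '/exports/scene-1/src']
```

Each intermediate prefix is created before its child, so a nested output path never fails for a missing parent directory.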
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it explains what the tool does (converts scenes to code/files), where outputs go (VFS workspace), and how to retrieve them (via chuk-artifacts). It also hints at side effects (file generation) and provides an example of usage and response handling, though it doesn't cover permissions, rate limits, or error cases.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with a clear purpose statement. The 'Args' and 'Returns' sections are structured for readability, and the 'Tips for LLMs' and 'Example' add value without redundancy. The example could be slightly more concise and some tips could be integrated earlier, but overall it's efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, no annotations, no output schema), the description is largely complete: it covers purpose, parameters, usage tips, and an example. It explains the return type (ExportSceneResponse with output paths) and how to handle outputs. A minor gap is the lack of explicit error handling or performance details, but for a tool with good parameter and behavioral coverage, it's sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains each parameter: scene_id as 'Scene identifier', format with detailed options and use cases in 'Tips for LLMs', and output_path with its optional nature and auto-generation behavior. This fully compensates for the schema's lack of descriptions, providing clear semantics for all three parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Export', 'Converts') and resources ('scene', 'code/files'), distinguishing it from sibling tools like stage_get_scene or stage_create_scene by focusing on output generation rather than retrieval or creation. It explicitly lists the target formats (R3F, Remotion, glTF) and use cases (React Three Fiber, Remotion rendering).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to convert scenes for rendering in specific frameworks) but does not explicitly state when not to use it or name alternatives among siblings. The 'Tips for LLMs' section implies usage scenarios for each format, helping guide selection based on output needs, though it lacks direct comparison to other tools like stage_get_scene for raw data access.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
