Glama

list_report_images

Read-only, Idempotent

List report images from ORFS runs organized by stage to visualize results for a given platform, design, and run slug.

Instructions

List available report images from ORFS runs organized by stage.

Input Schema

Name      Required  Description  Default
--------  --------  -----------  -------
platform  Yes       -            -
design    Yes       -            -
run_slug  Yes       -            -
stage     No        -            all
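A hypothetical call might fill the four parameters as below; the platform, design, and run-slug values are invented examples, not real ORFS runs:

```python
# Example arguments for list_report_images; all concrete values here are
# illustrative assumptions, not taken from an actual deployment.
args = {
    "platform": "asap7",          # PDK platform name (assumed example)
    "design": "gcd",              # design name (assumed example)
    "run_slug": "run-2024-01-01", # run identifier (assumed example)
    "stage": "cts",               # optional filter; defaults to "all"
}
```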

Output Schema

Name    Required  Description  Default
------  --------  -----------  -------
result  Yes       -            -
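For illustration, a successful result could take the following shape; the paths, sizes, and timestamps are made-up values, but the field names match the ImageInfo and ListImagesResult models shown in the implementation reference:

```python
# Hypothetical success payload; values are invented, field names mirror
# the ImageInfo / ListImagesResult models from the implementation reference.
example_result = {
    "run_path": "/reports/asap7/gcd/run1",
    "total_images": 1,
    "images_by_stage": {
        "cts": [
            {
                "filename": "cts_clk.webp",
                "path": "/reports/asap7/gcd/run1/cts_clk.webp",
                "size_bytes": 10240,
                "modified_time": "2024-01-01T12:00:00",
                "type": "clock_visualization",
            }
        ]
    },
}
```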

Implementation Reference

  • Core implementation of ListReportImagesTool: validates the run path, discovers .webp files, classifies them by stage, and returns an organized list of ImageInfo objects.
    class ListReportImagesTool(BaseTool):
        """Tool for listing available report images from ORFS runs."""
    
        async def execute(self, platform: str, design: str, run_slug: str, stage: str = "all") -> str:
            """List available report images for a specific run."""
            try:
                reports_base, run_path = _resolve_run_path(platform, design, run_slug)
    
                if not run_path.exists():
                    logger.warning(f"Run slug not found: {run_slug}")
                    available_runs = [d.name for d in reports_base.iterdir() if d.is_dir()]
                    return self._format_result(
                        ListImagesResult(
                            error="RunSlugNotFound",
                            message=f"Run slug '{run_slug}' not found in {reports_base}. "
                            f"Available run slugs: {', '.join(sorted(available_runs)[:5]) if available_runs else 'none'}",
                        )
                    )
    
                webp_files = list(run_path.rglob("*.webp"))
    
                if not webp_files:
                    logger.warning(f"No webp images found in {run_path}")
                    return self._format_result(
                        ListImagesResult(
                            run_path=str(run_path),
                            total_images=0,
                            images_by_stage={},
                            message=f"No webp images found in {run_path}",
                        )
                    )
    
                images_by_stage: dict[str, list[ImageInfo]] = {}
    
                for webp_file in webp_files:
                    file_stage, file_type = classify_image_type(webp_file.name)
    
                    if stage != "all" and file_stage != stage:
                        continue
    
                    stat = webp_file.stat()
    
                    image_info = ImageInfo(
                        filename=webp_file.name,
                        path=str(webp_file),
                        size_bytes=stat.st_size,
                        modified_time=datetime.fromtimestamp(stat.st_mtime).isoformat(),
                        type=file_type,
                    )
    
                    if file_stage not in images_by_stage:
                        images_by_stage[file_stage] = []
                    images_by_stage[file_stage].append(image_info)
    
                for stage_images in images_by_stage.values():
                    stage_images.sort(key=lambda x: x.filename)
    
                total_images = sum(len(images) for images in images_by_stage.values())
    
                result = ListImagesResult(
                    run_path=str(run_path),
                    total_images=total_images,
                    images_by_stage=images_by_stage,
                )
    
                return self._format_result(result)
            except ValidationError as e:
                return self._format_result(ListImagesResult(error=type(e).__name__, message=str(e)))
            except Exception as e:
                logger.exception(f"Failed to list report images: {e}")
                return self._format_result(
                    ListImagesResult(
                        error="UnexpectedError",
                        message=f"Failed to list report images: {str(e)}",
                    )
                )
  • MCP tool registration using the @mcp.tool decorator: maps the 'list_report_images' tool name to the handler.
    # Report image tools
    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,
            destructiveHint=False,
            idempotentHint=True,
            openWorldHint=False,
        )
    )
    async def list_report_images(platform: str, design: str, run_slug: str, stage: str = "all") -> str:
        """List available report images from ORFS runs organized by stage."""
        return await list_report_images_tool.execute(platform, design, run_slug, stage)
  • ImageInfo model: a single image entry (filename, path, size, modified_time, type).
    class ImageInfo(BaseModel):
        """Information about a single report image."""
    
        filename: str
        path: str
        size_bytes: int
        modified_time: str
        type: str
  • ListImagesResult model: the top-level result shape with run_path, total_images, and images_by_stage.
    class ListImagesResult(BaseResult):
        """Result from listing report images."""
    
        run_path: str | None = None
        total_images: int | None = None
        images_by_stage: dict[str, list[ImageInfo]] | None = None
        message: str | None = None
  • Image type mapping and classify_image_type helper: maps filenames to stage/type categories.
    IMAGE_TYPE_MAPPING = {
        "cts_clk": "clock_visualization",
        "cts_clk_layout": "clock_layout",
        "cts_core_clock": "core_clock_visualization",
        "cts_core_clock_layout": "core_clock_layout",
        "final_all": "complete_design",
        "final_clocks": "clock_routing",
        "final_congestion": "congestion_heatmap",
        "final_ir_drop": "ir_drop_analysis",
        "final_placement": "cell_placement",
        "final_resizer": "resizer_results",
        "final_routing": "routing_visualization",
    }
    
    
    def classify_image_type(filename: str) -> tuple[str, str]:
        """Classify image by stage and type based on filename."""
        base_name = filename.rsplit(".", 1)[0]
    
        stage = base_name.split("_")[0] if "_" in base_name else "unknown"
    
        image_type = IMAGE_TYPE_MAPPING.get(base_name, "unknown")
    
        return stage, image_type
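The classification logic can be exercised standalone. This demo repeats the function above with a trimmed-down mapping to show how stage and type fall out of a filename:

```python
# Standalone demo of the filename classification shown above,
# with a reduced copy of IMAGE_TYPE_MAPPING for brevity.
IMAGE_TYPE_MAPPING = {
    "cts_clk": "clock_visualization",
    "final_congestion": "congestion_heatmap",
}

def classify_image_type(filename: str) -> tuple[str, str]:
    """Classify image by stage and type based on filename."""
    base_name = filename.rsplit(".", 1)[0]
    # Stage is the prefix before the first underscore, if any.
    stage = base_name.split("_")[0] if "_" in base_name else "unknown"
    return stage, IMAGE_TYPE_MAPPING.get(base_name, "unknown")

print(classify_image_type("cts_clk.webp"))           # ('cts', 'clock_visualization')
print(classify_image_type("final_congestion.webp"))  # ('final', 'congestion_heatmap')
print(classify_image_type("notes.webp"))             # ('unknown', 'unknown')
```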

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so the safe read behavior is known. The description adds 'organized by stage' but lacks details about pagination, performance, or edge cases. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single sentence, front-loaded with the verb and resource. It is concise but could be slightly expanded to cover parameters without losing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters (3 required) and no parameter descriptions in the schema, the description is too minimal. It does not explain what the output contains (though an output schema exists) or how to effectively use the required parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It only hints at the 'stage' parameter ('organized by stage') and ignores 'platform', 'design', and 'run_slug'. Gives no help on how to construct valid inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource 'report images from ORFS runs organized by stage', which distinguishes it from the sibling 'read_report_image' tool. It effectively communicates what the tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. The description does not mention when not to use it or point to siblings like 'read_report_image' for specific needs. The agent must infer usage from context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
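As one sketch of how the Completeness, Parameters, and Usage Guidelines gaps could be closed, the handler's docstring might spell out parameters and alternatives. The wording and example values below are illustrative, not the project's actual code:

```python
# Illustrative docstring only; parameter explanations and example values
# ('asap7', 'cts') are assumptions, not taken from the project.
async def list_report_images(platform: str, design: str, run_slug: str, stage: str = "all") -> str:
    """List .webp report images from an ORFS run, grouped by flow stage.

    platform: PDK platform directory name (e.g. 'asap7').
    design: design name under that platform.
    run_slug: identifier of the run whose reports to list.
    stage: restrict to one stage (e.g. 'cts', 'final'); default 'all'.

    Use read_report_image afterwards to fetch a specific image.
    """
    ...
```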
