
EDA Tools MCP Server

by NellyW8

read_openlane_reports

Extract and analyze OpenLane report files to examine PPA metrics, timing, routing quality, and design results for LLM-based assessment.

Instructions

Read OpenLane report files for LLM analysis. Returns all reports or specific category for detailed analysis of PPA metrics, timing, routing quality, and other design results.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| project_id | Yes | Project ID from OpenLane run | (none) |
| report_type | No | Specific report category to read (synthesis, placement, routing, final, etc.). Leave empty to read all reports. | "" |

Implementation Reference

  • The core handler function in EDAServer class that implements the read_openlane_reports tool. It locates the latest OpenLane run for the project, reads key report files (synthesis stats, timing reports, final summary), parses PPA metrics like cell count and timing slack, assesses design status, and returns structured JSON with excerpts and analysis.
    async readOpenlaneReports(projectId: string, reportType?: string): Promise<string> {
      try {
        const project = this.projects.get(projectId);
        if (!project) {
          return JSON.stringify({
            success: false,
            error: `Project ${projectId} not found.`,
          }, null, 2);
        }
    
        const runsDir = join(project.dir, "runs");
        let latestRun = "";
        
        try {
          const runs = await fs.readdir(runsDir);
          if (runs.length === 0) {
            return JSON.stringify({
              success: false,
              error: "No OpenLane runs found. Run OpenLane flow first.",
            }, null, 2);
          }
          latestRun = runs.sort().reverse()[0];
        } catch {
          return JSON.stringify({
            success: false,
            error: "No runs directory found. Run OpenLane flow first.",
          }, null, 2);
        }
    
        const reportsDir = join(project.dir, "runs", latestRun, "reports");
        const finalDir = join(project.dir, "runs", latestRun, "final");
        
        // Simple results object
        const results: any = {
          project_id: projectId,
          run_id: latestRun,
          success: true,
          ppa_metrics: {
            power_mw: null,
            max_frequency_mhz: null,
            total_cells: null,
            logic_area_um2: null,
            timing_slack_ns: null
          },
          design_status: {
            synthesis_complete: false,
            timing_clean: false,
            routing_complete: false
          },
          reports: {}
        };
    
        // Helper to safely read file
        const readFile = async (path: string) => {
          try {
            return await fs.readFile(path, 'utf8');
          } catch {
            return null;
          }
        };
    
        // Read synthesis report
        const synthReport = await readFile(join(reportsDir, "synthesis", "1-synthesis.stat.rpt"));
        if (synthReport) {
          results.design_status.synthesis_complete = true;
          results.reports.synthesis = synthReport.substring(0, 2000);
          
          const cellMatch = synthReport.match(/Number of cells:\s*(\d+)/);
          if (cellMatch) {
            results.ppa_metrics.total_cells = parseInt(cellMatch[1]);
          }
        }
    
        // Read timing report
        try {
          const routingDir = join(reportsDir, "routing");
          const files = await fs.readdir(routingDir);
          
          for (const file of files) {
            if (file.includes('sta') || file.includes('timing')) {
              const timingReport = await readFile(join(routingDir, file));
              if (timingReport) {
                results.reports.timing = timingReport.substring(0, 2000);
                
                const wnsMatch = timingReport.match(/WNS.*?(-?\d+\.?\d*)/i);
                if (wnsMatch) {
                  const wns = parseFloat(wnsMatch[1]);
                  results.ppa_metrics.timing_slack_ns = wns;
                  results.design_status.timing_clean = wns >= 0;
                }
                break;
              }
            }
          }
        } catch {
          // Timing reports not available
        }
    
        // Read final summary if available
        const finalSummary = await readFile(join(finalDir, "final.summary.rpt"));
        if (finalSummary) {
          results.reports.final_summary = finalSummary.substring(0, 3000);
          results.design_status.routing_complete = true;
        }
    
        // Add analysis summary
        const issues = [];
        if (!results.design_status.synthesis_complete) issues.push("Synthesis incomplete");
        if (!results.design_status.timing_clean) issues.push("Timing violations detected");
        if (!results.design_status.routing_complete) issues.push("Routing incomplete");
    
        results.summary = {
          status: issues.length === 0 ? "SUCCESS" : "ISSUES_FOUND",
          issues: issues,
          note: "PPA metrics and design status extracted from OpenLane reports"
        };
    
        return JSON.stringify(results, null, 2);
    
      } catch (error: any) {
        return JSON.stringify({
          success: false,
          error: error.message || String(error),
        }, null, 2);
      }
    }
  • Input schema for the read_openlane_reports tool, defining required 'project_id' and optional 'report_type' parameters.
    type: "object",
    properties: {
      project_id: {
        type: "string",
        description: "Project ID from OpenLane run"
      },
      report_type: {
        type: "string",
        description: "Specific report category to read (synthesis, placement, routing, final, etc.). Leave empty to read all reports.",
        default: ""
      },
    },
    required: ["project_id"],
    },
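    A call conforming to this schema might pass arguments like the following; the project ID here is a made-up example, not one from the source:

```typescript
// Illustrative arguments only; "counter_v1" is a hypothetical project ID.
const args = {
  project_id: "counter_v1",
  report_type: "synthesis", // omit or pass "" to read every category
};
```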
  • src/index.ts:840-857 (registration)
    Tool registration in the ListTools response, defining name, description, and input schema.
        name: "read_openlane_reports",
        description: "Read OpenLane report files for LLM analysis. Returns all reports or specific category for detailed analysis of PPA metrics, timing, routing quality, and other design results.",
        inputSchema: {
          type: "object",
          properties: {
            project_id: {
              type: "string",
              description: "Project ID from OpenLane run"
            },
            report_type: {
              type: "string",
              description: "Specific report category to read (synthesis, placement, routing, final, etc.). Leave empty to read all reports.",
              default: ""
            },
          },
          required: ["project_id"],
        },
    },
  • src/index.ts:929-939 (registration)
    Dispatcher case in CallToolRequestHandler that validates parameters and invokes the readOpenlaneReports handler.
    case "read_openlane_reports": {
      const projectId = validateRequiredString(args, "project_id", name);
      const reportType = getStringProperty(args, "report_type", "");

      return {
        content: [{
          type: "text",
          text: await edaServer.readOpenlaneReports(projectId, reportType || undefined),
        }],
      };
    }
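The `validateRequiredString` and `getStringProperty` helpers are referenced but not shown in this excerpt. Plausible sketches, assuming they follow common MCP-server validation patterns (the actual implementations in src/index.ts may differ):

```typescript
// Hypothetical sketch of the dispatcher's validation helpers.
// Throws when a required string argument is missing or empty.
function validateRequiredString(
  args: Record<string, unknown>,
  key: string,
  toolName: string
): string {
  const value = args[key];
  if (typeof value !== "string" || value.length === 0) {
    throw new Error(`${toolName}: "${key}" must be a non-empty string`);
  }
  return value;
}

// Returns the argument if it is a string, otherwise the fallback.
function getStringProperty(
  args: Record<string, unknown>,
  key: string,
  fallback: string
): string {
  const value = args[key];
  return typeof value === "string" ? value : fallback;
}
```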
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool reads reports and returns data for analysis, but lacks details on permissions needed, rate limits, error handling, or whether it's read-only (implied by 'read' but not explicit). For a tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
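One way to close this gap would be MCP tool annotations. A sketch using the optional annotation hints from the MCP specification; the values below are assumptions about this tool's behavior, since the server sets no annotations:

```typescript
// Assumed annotation values for read_openlane_reports. Field names follow
// the MCP ToolAnnotations hints; the server does not currently set any.
const annotations = {
  title: "Read OpenLane Reports",
  readOnlyHint: true,     // only reads files under the latest runs/ directory
  destructiveHint: false, // never modifies project state
  idempotentHint: true,   // repeat calls on the same run return the same data
  openWorldHint: false,   // touches only the local filesystem
};
```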

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that efficiently convey purpose and scope. It's front-loaded with the main function ('Read OpenLane report files for LLM analysis') and follows with additional context. No wasted words, though it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 2 parameters with full schema coverage and no output schema, the description adequately covers the tool's purpose and general use. However, as a read operation with no annotations, it should ideally mention safety (e.g., read-only) or data format expectations. The lack of output schema means the description doesn't explain return values, which is a gap for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
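If the server targets an MCP revision that supports structured tool output, the missing output documentation could be addressed with an output schema. The sketch below is illustrative and covers only part of the JSON the handler actually builds:

```typescript
// Illustrative, partial output schema for the handler's return value;
// field names mirror the results object assembled in readOpenlaneReports.
const outputSchema = {
  type: "object",
  properties: {
    success: { type: "boolean" },
    project_id: { type: "string" },
    run_id: { type: "string" },
    ppa_metrics: {
      type: "object",
      properties: {
        total_cells: { type: ["integer", "null"] },
        timing_slack_ns: { type: ["number", "null"] },
      },
    },
    design_status: { type: "object" },
    reports: { type: "object" },
  },
  required: ["success"],
};
```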

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('project_id' and 'report_type') with clear descriptions. The description adds marginal value by mentioning 'specific category' and 'detailed analysis of PPA metrics, timing, routing quality', which aligns with the schema but doesn't provide additional syntax or format details beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Read OpenLane report files for LLM analysis' specifies the verb (read) and resource (OpenLane report files). It distinguishes the tool from siblings like 'run_openlane' or 'view_gds' by focusing on report analysis rather than execution or visualization. However, it doesn't explicitly differentiate it from 'view_waveform', which might also involve reading data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('for LLM analysis' and 'detailed analysis of PPA metrics, timing, routing quality') but doesn't explicitly state when to use this tool versus alternatives like 'run_openlane' for execution or 'view_gds' for visualization. No exclusions or prerequisites are mentioned, leaving usage guidance at an implied level.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
