
EDA Tools MCP Server

by NellyW8

read_openlane_reports

Extract and analyze OpenLane design reports to review PPA metrics, timing, routing quality, and other results for LLM processing.

Instructions

Read OpenLane report files for LLM analysis. Returns all reports or specific category for detailed analysis of PPA metrics, timing, routing quality, and other design results.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_id | Yes | Project ID from OpenLane run | |
| report_type | No | Specific report category to read (synthesis, placement, routing, final, etc.). Leave empty to read all reports. | "" |
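A minimal sketch of the arguments an agent might pass to this tool; the project ID below is illustrative, not taken from an actual run:

```typescript
// Hypothetical arguments for read_openlane_reports.
// "spm_run" is a made-up project ID for illustration.
const args = {
  project_id: "spm_run",     // required: project ID from a prior OpenLane run
  report_type: "synthesis",  // optional: omit (or pass "") to read all reports
};

console.log(JSON.stringify(args));
```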

Implementation Reference

  • The core handler function `readOpenlaneReports` lives in the EDAServer class. It locates the latest OpenLane run, reads key report files (synthesis stats, timing reports, final summary), extracts PPA metrics (cell count, timing slack), determines the design status, and returns structured JSON with report excerpts and an analysis summary.
    // Assumes the surrounding file imports { join } from "path" and { promises as fs } from "fs".
    async readOpenlaneReports(projectId: string, reportType?: string): Promise<string> {
      try {
        const project = this.projects.get(projectId);
        if (!project) {
          return JSON.stringify({
            success: false,
            error: `Project ${projectId} not found.`,
          }, null, 2);
        }
    
        const runsDir = join(project.dir, "runs");
        let latestRun = "";
        
        try {
          const runs = await fs.readdir(runsDir);
          if (runs.length === 0) {
            return JSON.stringify({
              success: false,
              error: "No OpenLane runs found. Run OpenLane flow first.",
            }, null, 2);
          }
          latestRun = runs.sort().reverse()[0];
        } catch {
          return JSON.stringify({
            success: false,
            error: "No runs directory found. Run OpenLane flow first.",
          }, null, 2);
        }
    
        const reportsDir = join(project.dir, "runs", latestRun, "reports");
        const finalDir = join(project.dir, "runs", latestRun, "final");
        
        // Simple results object
        const results: any = {
          project_id: projectId,
          run_id: latestRun,
          success: true,
          ppa_metrics: {
            power_mw: null,
            max_frequency_mhz: null,
            total_cells: null,
            logic_area_um2: null,
            timing_slack_ns: null
          },
          design_status: {
            synthesis_complete: false,
            timing_clean: false,
            routing_complete: false
          },
          reports: {}
        };
    
        // Helper to safely read file
        const readFile = async (path: string) => {
          try {
            return await fs.readFile(path, 'utf8');
          } catch {
            return null;
          }
        };
    
        // Read synthesis report
        const synthReport = await readFile(join(reportsDir, "synthesis", "1-synthesis.stat.rpt"));
        if (synthReport) {
          results.design_status.synthesis_complete = true;
          results.reports.synthesis = synthReport.substring(0, 2000);
          
          const cellMatch = synthReport.match(/Number of cells:\s*(\d+)/);
          if (cellMatch) {
            results.ppa_metrics.total_cells = parseInt(cellMatch[1]);
          }
        }
    
        // Read timing report
        try {
          const routingDir = join(reportsDir, "routing");
          const files = await fs.readdir(routingDir);
          
          for (const file of files) {
            if (file.includes('sta') || file.includes('timing')) {
              const timingReport = await readFile(join(routingDir, file));
              if (timingReport) {
                results.reports.timing = timingReport.substring(0, 2000);
                
                const wnsMatch = timingReport.match(/WNS.*?(-?\d+\.?\d*)/i);
                if (wnsMatch) {
                  const wns = parseFloat(wnsMatch[1]);
                  results.ppa_metrics.timing_slack_ns = wns;
                  results.design_status.timing_clean = wns >= 0;
                }
                break;
              }
            }
          }
        } catch {
          // Timing reports not available
        }
    
        // Read final summary if available
        const finalSummary = await readFile(join(finalDir, "final.summary.rpt"));
        if (finalSummary) {
          results.reports.final_summary = finalSummary.substring(0, 3000);
          results.design_status.routing_complete = true;
        }

        // Honor the optional report_type filter (the parameter was otherwise unused)
        if (reportType) {
          for (const key of Object.keys(results.reports)) {
            if (!key.includes(reportType)) delete results.reports[key];
          }
        }
    
        // Add analysis summary
        const issues: string[] = [];
        if (!results.design_status.synthesis_complete) issues.push("Synthesis incomplete");
        if (!results.design_status.timing_clean) issues.push("Timing violations detected");
        if (!results.design_status.routing_complete) issues.push("Routing incomplete");
    
        results.summary = {
          status: issues.length === 0 ? "SUCCESS" : "ISSUES_FOUND",
          issues: issues,
          note: "PPA metrics and design status extracted from OpenLane reports"
        };
    
        return JSON.stringify(results, null, 2);
    
      } catch (error: any) {
        return JSON.stringify({
          success: false,
          error: error.message || String(error),
        }, null, 2);
      }
    }
  • JSON Schema defining the input parameters for the tool: required 'project_id' string and optional 'report_type' string.
    inputSchema: {
      type: "object",
      properties: {
        project_id: {
          type: "string",
          description: "Project ID from OpenLane run"
        },
        report_type: {
          type: "string",
          description: "Specific report category to read (synthesis, placement, routing, final, etc.). Leave empty to read all reports.",
          default: ""
        },
      },
      required: ["project_id"],
    },
  • src/index.ts:840-857 (registration)
    Tool object registration in the ListTools handler, specifying name, description, and input schema.
      name: "read_openlane_reports",
      description: "Read OpenLane report files for LLM analysis. Returns all reports or specific category for detailed analysis of PPA metrics, timing, routing quality, and other design results.",
      inputSchema: {
        type: "object",
        properties: {
          project_id: {
            type: "string",
            description: "Project ID from OpenLane run"
          },
          report_type: {
            type: "string",
            description: "Specific report category to read (synthesis, placement, routing, final, etc.). Leave empty to read all reports.",
            default: ""
          },
        },
        required: ["project_id"],
      },
    },
  • src/index.ts:929-939 (registration)
    Switch case in CallToolRequest handler that validates arguments and delegates to edaServer.readOpenlaneReports.
    case "read_openlane_reports": {
      const projectId = validateRequiredString(args, "project_id", name);
      const reportType = getStringProperty(args, "report_type", "");

      return {
        content: [{
          type: "text",
          text: await edaServer.readOpenlaneReports(projectId, reportType || undefined),
        }],
      };
    }
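The helpers `validateRequiredString` and `getStringProperty` referenced in the switch case are not shown on this page. A plausible sketch of what they might do, under the assumption that they throw on missing required arguments and fall back on optional ones (the real implementations in src/index.ts may differ):

```typescript
// Hypothetical sketches of the argument validators; the actual code may differ.
function validateRequiredString(args: Record<string, unknown>, key: string, tool: string): string {
  const value = args[key];
  if (typeof value !== "string" || value.length === 0) {
    throw new Error(`${tool}: required string argument '${key}' is missing or empty`);
  }
  return value;
}

function getStringProperty(args: Record<string, unknown>, key: string, fallback: string): string {
  const value = args[key];
  return typeof value === "string" ? value : fallback;
}

const projectId = validateRequiredString({ project_id: "p1" }, "project_id", "read_openlane_reports");
const reportType = getStringProperty({ project_id: "p1" }, "report_type", "");
console.log(projectId, JSON.stringify(reportType));
```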
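The PPA-extraction regexes in `readOpenlaneReports` can be exercised standalone. The report excerpts below are made up for illustration, not real OpenLane output:

```typescript
// Standalone exercise of the parsing regexes used in readOpenlaneReports,
// run against fabricated report excerpts.
const synthReport = "=== counter ===\n\n   Number of cells:    142\n";
const timingReport = "wns -0.35\ntns -1.20\n";

// Cell count from the Yosys-style synthesis stat report.
const cellMatch = synthReport.match(/Number of cells:\s*(\d+)/);
const totalCells = cellMatch ? parseInt(cellMatch[1]) : null;

// Worst negative slack from the timing report; negative WNS means violations.
const wnsMatch = timingReport.match(/WNS.*?(-?\d+\.?\d*)/i);
const wns = wnsMatch ? parseFloat(wnsMatch[1]) : null;
const timingClean = wns !== null && wns >= 0;

console.log(totalCells, wns, timingClean);
```

Note that the WNS pattern is deliberately loose (`WNS` followed by the first number), so it tolerates both `WNS: -0.35 ns` and bare `wns -0.35` formats.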
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool 'Returns all reports or specific category,' indicating a read-only operation, but doesn't disclose behavioral traits such as error handling, performance characteristics, or whether it requires specific permissions or has rate limits. This is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, with two sentences that efficiently convey the tool's purpose and scope. The first sentence states the core function, and the second adds context about the analysis. There's no wasted text, though it could be slightly more structured for clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is moderately complete for a read tool. It covers the purpose and general usage but lacks details on return values, error cases, or behavioral constraints. For a tool with 2 parameters and 100% schema coverage, it's adequate but has clear gaps in transparency and output explanation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('project_id' and 'report_type') with clear descriptions. The description adds minimal value beyond the schema by mentioning 'specific category' and 'detailed analysis,' but doesn't provide additional syntax, format details, or examples. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Read OpenLane report files for LLM analysis.' It specifies the verb ('Read') and resource ('OpenLane report files'), and mentions the analysis context ('for LLM analysis'). However, it doesn't explicitly differentiate from sibling tools like 'run_openlane' or 'view_gds', which might also involve reading or accessing OpenLane data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'detailed analysis of PPA metrics, timing, routing quality, and other design results,' suggesting it's for post-run analysis. However, it lacks explicit guidance on when to use this tool versus alternatives like 'run_openlane' (for execution) or 'view_gds' (for visual inspection), and doesn't specify prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
