
open_dashboard

Launch a visual dashboard to review CVE details, severity charts, and upgrade guides for npm, Python, Go, and Rust projects before applying security fixes.

Instructions

Launch the osv-ui visual dashboard in the browser for human review. This is the HUMAN-IN-THE-LOOP step — always offer this before applying fixes. The dashboard shows full CVE details, severity charts, and the upgrade guide. Returns the dashboard URL. If already running for this path, returns existing URL.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `path` | No | Path to the project directory to display in the dashboard. | `.` (current directory) |
| `port` | No | Port to run the dashboard on. | Auto-assigned, starting from 2003 |
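Both parameters are optional; omitting them displays the current directory on an auto-assigned port. A `tools/call` request for this tool might look like the following (the `path` and `port` values are illustrative):

```json
{
  "method": "tools/call",
  "params": {
    "name": "open_dashboard",
    "arguments": {
      "path": "./my-project",
      "port": 4001
    }
  }
}
```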

Implementation Reference

  • The handleOpenDashboard function manages the launching of the osv-ui browser dashboard, including port assignment and opening the URL in the browser.
    async function handleOpenDashboard({ path: dir = '.', port }) {
      const absDir = resolve(dir);
    
      // Already running?
      const existing = runningDashboards.get(absDir);
      if (existing) {
        const url = `http://localhost:${existing.port}`;
        try { const { default: open } = await import('open'); await open(url); } catch {}
        return ok(`Dashboard already running at ${url}\n\nThe osv-ui dashboard is now open in your browser. Review the vulnerabilities and upgrade guide, then come back and tell me which packages you want to fix.`);
      }
    
      const assignedPort = port || nextPort++;
      const osvUiBin = findOsvUiBin();
    
      if (!osvUiBin) {
        return err(
          'osv-ui CLI not found. Install it first:\n\n  npm install -g osv-ui\n\nThen retry open_dashboard.'
        );
      }
    
      // Spawn osv-ui as a detached background process; --no-open avoids opening a duplicate browser tab
      const child = spawn(
        process.execPath,
        [osvUiBin, absDir, `--port=${assignedPort}`, '--no-open'],
        { detached: true, stdio: 'ignore' }
      );
      child.unref();
      runningDashboards.set(absDir, { port: assignedPort, pid: child.pid });
    
      // Wait for server to be ready
      await waitForPort(assignedPort, 8000);
    
      const url = `http://localhost:${assignedPort}`;
  • The schema definition for the 'open_dashboard' tool, detailing input parameters such as 'path' and 'port'.
      name: 'open_dashboard',
      description:
        'Launch the osv-ui visual dashboard in the browser for human review. ' +
        'This is the HUMAN-IN-THE-LOOP step — always offer this before applying fixes. ' +
        'The dashboard shows full CVE details, severity charts, and the upgrade guide. ' +
        'Returns the dashboard URL. If already running for this path, returns existing URL.',
      inputSchema: {
        type: 'object',
        properties: {
          path: {
            type: 'string',
            description: 'Path to the project directory to display in the dashboard.',
          },
          port: {
            type: 'number',
            description: 'Port to run the dashboard on. Default: auto-assigned starting from 2003.',
          },
        },
      },
    },
  • The request handler registration where 'open_dashboard' is mapped to the 'handleOpenDashboard' function.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
    
      try {
        switch (name) {
          case 'scan_project': return await handleScanProject(args);
          case 'open_dashboard': return await handleOpenDashboard(args);
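The handlers above wrap their messages with `ok()` and `err()`, which are not shown in the excerpt. A minimal sketch consistent with the MCP tool-result shape (hypothetical; the project's real helpers may add more fields):

```javascript
// Wrap a message in the MCP tool-result shape.
function ok(text) {
  return { content: [{ type: 'text', text }] };
}

// Same shape, with isError: true to signal a failed call to the client.
function err(text) {
  return { content: [{ type: 'text', text }], isError: true };
}
```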
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses that this launches a browser dashboard for human review, returns a URL, and handles existing instances by returning the existing URL if already running. However, it doesn't mention potential side effects like opening browser tabs or resource usage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by workflow guidance and behavioral details, all in three efficient sentences with zero wasted words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well by explaining the tool's purpose, usage context, and return value (URL). However, it lacks details on error handling or what happens if the path is invalid, leaving minor gaps in completeness for a tool with two parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (path and port). The description adds no additional parameter semantics beyond what's in the schema, such as format examples or constraints, meeting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Launch the osv-ui visual dashboard in the browser') and resource ('for human review'), distinguishing it from siblings like apply_fixes, get_fix_commands, and scan_project by emphasizing it's the 'HUMAN-IN-THE-LOOP step' for review before applying fixes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on when to use this tool ('always offer this before applying fixes') and distinguishes it from alternatives by positioning it as a prerequisite step to apply_fixes, with clear context about its role in the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/toan203/osv-ui'
