
supabase-security-mcp

by Perufitlife

list_findings

Retrieve findings from the last security audit of a Supabase project. Filter by severity to inspect critical, high, medium, low, or info issues.

Instructions

List findings from the last audit of a project, optionally filtered by severity. Use after audit_project to inspect specific issues.

Input Schema

Name          Required  Description                                              Default
project_ref   Yes       Reference of the Supabase project whose audit to read.   -
severity      No        One of: critical, high, medium, low, info.               -

Implementation Reference

  • The handler function for the list_findings tool. It retrieves findings from the in-memory cache (populated by the audit_project tool), optionally filters by severity, and returns a formatted list with index numbers, severity, title, and target.
    server.registerTool(
      "list_findings",
      {
        description: "List findings from the last audit of a project, optionally filtered by severity. Use after audit_project to inspect specific issues.",
        inputSchema: {
          project_ref: z.string(),
          severity: z.enum(["critical", "high", "medium", "low", "info"]).optional(),
        },
      },
      async ({ project_ref, severity }) => {
        const c = cache.get(project_ref);
        if (!c) return { content: [{ type: "text", text: `No cached audit for ${project_ref}. Run audit_project first.` }], isError: true };
        const filtered = severity ? c.result.findings.filter((f) => f.severity === severity) : c.result.findings;
        return {
          content: [
            { type: "text", text: `${filtered.length} finding(s)${severity ? ` at severity=${severity}` : ""}:` },
            { type: "text", text: filtered.map((f, i) => `[${i}] ${f.severity.toUpperCase()} — ${f.title} — target: ${f.target}`).join("\n") || "(none)" },
          ],
        };
      }
    );
  • Input schema for list_findings. Accepts project_ref (required string) and severity (optional enum: critical/high/medium/low/info) for filtering findings.
    inputSchema: {
      project_ref: z.string(),
      severity: z.enum(["critical", "high", "medium", "low", "info"]).optional(),
    },
  • src/server.js:65-85 (registration)
    Registration of the list_findings tool with the MCP server via server.registerTool(); the full call is the same snippet shown in the handler bullet above.
  • In-memory cache map that stores audit results per project_ref, enabling list_findings to retrieve findings without re-running the audit.
    const cache = new Map(); // ref -> { result, ts }
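Taken together, the cache and handler logic can be exercised without the MCP SDK. The sketch below (plain Node; the findings data is made up for illustration, real entries come from audit_project) reproduces the cache lookup, the severity filter, and the output formatting:

```javascript
// Minimal SDK-free sketch of the cache -> filter -> format pipeline.
// Finding values below are illustrative, not from a real audit.
const cache = new Map(); // ref -> { result, ts }

cache.set("myproject", {
  ts: Date.now(),
  result: {
    findings: [
      { severity: "critical", title: "RLS disabled on public table", target: "public.users" },
      { severity: "low", title: "Unused service role key", target: "api" },
    ],
  },
});

function listFindings(project_ref, severity) {
  const c = cache.get(project_ref);
  if (!c) return `No cached audit for ${project_ref}. Run audit_project first.`;
  const filtered = severity
    ? c.result.findings.filter((f) => f.severity === severity)
    : c.result.findings;
  const header = `${filtered.length} finding(s)${severity ? ` at severity=${severity}` : ""}:`;
  const body =
    filtered
      .map((f, i) => `[${i}] ${f.severity.toUpperCase()} — ${f.title} — target: ${f.target}`)
      .join("\n") || "(none)";
  return `${header}\n${body}`;
}

console.log(listFindings("myproject", "critical"));
console.log(listFindings("unknown"));
```

Filtering before formatting keeps the header count consistent with the listed lines, and the `|| "(none)"` fallback covers the case where a severity filter matches nothing.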
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the behavioral disclosure burden. It conveys that only findings from the 'last audit' are listed, implying a session context. However, it does not describe behavior in edge cases (e.g., no prior audit, multiple audits, result format, pagination, or idempotency). The basic intent is clear but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
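One way to move some of that disclosure burden out of prose is MCP tool annotations. The object below is a sketch of what such hints could look like for list_findings; the field names follow the MCP tool-annotations spec, but the values are assumptions inferred from the tool's described behavior, not from the actual server:

```javascript
// Hypothetical annotations for list_findings. Field names are standard MCP
// tool-annotation hints; values are assumptions about this tool's behavior.
const listFindingsAnnotations = {
  readOnlyHint: true,      // only reads the in-memory cache; no side effects
  destructiveHint: false,  // never modifies the Supabase project
  idempotentHint: true,    // repeated calls return the same cached findings
  openWorldHint: false,    // no external calls; operates on cached results
};

console.log(JSON.stringify(listFindingsAnnotations));
```

With hints like these attached at registration time, the free-text description could focus on edge cases (no prior audit, staleness of the cache) rather than basic safety properties.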

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: the first defines the operation, the second provides usage guidance. No unnecessary words or repetition; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with 2 parameters and no output schema, the description covers the basic purpose and usage context. However, it omits details about the expected output (list of findings? fields?), error conditions (no audit found), and what constitutes 'last audit'. It is adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds that 'severity' is an optional filter, which clarifies its purpose. However, 'project_ref' (the required parameter) is not explained; the description only mentions 'a project' without specifying what format or identifier is expected. Partial but insufficient compensation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
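The 0% schema description coverage noted above could be lifted directly in the schema, since zod supports attaching per-field text via `.describe()`. The sketch below shows illustrative wording for each parameter (the descriptions are hypothetical, not from the actual server) along with the coverage arithmetic the reviewer applies:

```javascript
// Illustrative per-parameter descriptions; in src/server.js these could be
// attached with zod's .describe(), e.g. z.string().describe(text).
// The wording here is hypothetical.
const paramDocs = {
  project_ref: "Supabase project reference: the short ID from the project dashboard URL.",
  severity: "Optional filter; one of critical, high, medium, low, info. Omit to list all findings.",
};

// Coverage as the review measures it: fraction of parameters documented.
const names = Object.keys(paramDocs);
const documented = names.filter((n) => paramDocs[n] && paramDocs[n].length > 0);
console.log(`schema description coverage: ${(100 * documented.length) / names.length}%`);
```

Documenting `project_ref` in the schema would also address the review's point that the tool description never says what identifier format is expected.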

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List findings') and the resource ('from the last audit of a project'), with an optional filter on severity. This distinguishes it from sibling tools like 'audit_project' (which creates the audit) and 'apply_fix' (which applies fixes), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs to use this tool after 'audit_project' to inspect specific issues. This provides clear sequencing and context, helping the agent decide when to invoke this tool over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Perufitlife/supabase-security-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server