HackerOne MCP Server

by Sicks3c

search_reports

Search HackerOne vulnerability reports using filters for keywords, programs, severity, or state to find past reports for reference when drafting new ones.

Instructions

Search and list your HackerOne reports. Filter by keyword, program, severity, or state. Great for finding past reports to reference when drafting new ones.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | No | Keyword search (e.g. 'SSRF', 'OAuth', 'PassRole', 'S3') | |
| program | No | Program handle to filter by (e.g. 'uber', 'amazon') | |
| severity | No | Filter by severity rating | |
| state | No | Filter by report state | |
| page_size | No | Results per page | 25 |
| page_number | No | Page number for pagination | |
| sort | No | Sort field (e.g. 'reports.created_at' or '-reports.created_at' for descending) | |
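
A call that combines several of these filters might pass arguments like the following. This is a hypothetical example, not taken from the server's docs; the values ('uber', 'SSRF', etc.) are placeholders:

```typescript
// Hypothetical argument object for search_reports, using the parameter
// names from the schema above. A leading '-' on sort means descending.
const args = {
  query: "SSRF",                // keyword matched against title/body/weakness
  program: "uber",              // program handle
  severity: "high",
  state: "resolved",
  page_size: 10,
  sort: "-reports.created_at",  // newest first
};
console.log(JSON.stringify(args));
```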

Implementation Reference

  • The core implementation of `searchReports` handles pagination plus client-side filtering and sorting, since the HackerOne API does not support server-side filtering for reports.
    export async function searchReports(opts: SearchReportsOpts = {}) {
      // The /hackers/me/reports endpoint only supports pagination (page[number], page[size]).
      // Filtering by program, severity, state, keyword must be done client-side.
    
      const needsFilter = !!(opts.program || opts.severity || opts.state || opts.query);
      const requestedSize = opts.page_size ?? 25;
    
      // If filtering, fetch max results to filter from; otherwise respect page_size
      const fetchSize = needsFilter ? 100 : requestedSize;
      const pageNumber = opts.page_number ?? 1;
    
      let allReports: any[] = [];
    
      if (needsFilter) {
        // H1 hacker API doesn't support server-side filtering or sorting.
        // Strategy: find the last page first, then fetch backwards (newest first)
        // so recent reports are found quickly without fetching all 900+ reports.
    
        // Step 1: find total pages by probing
        let lastPage = 1;
        const probeRes = await h1Fetch("/hackers/me/reports", {
          "page[size]": "100",
          "page[number]": "1",
        });
        if (probeRes.data?.length === 100) {
          // Binary search for last page
          let lo = 1, hi = 50;
          while (lo < hi) {
            const mid = Math.ceil((lo + hi) / 2);
            const check = await h1Fetch("/hackers/me/reports", {
              "page[size]": "100",
              "page[number]": String(mid),
            });
            if (check.data?.length > 0) {
              lo = mid;
              if (check.data.length < 100) break; // This is the last page
              hi = Math.max(hi, mid + 5);
            } else {
              hi = mid - 1;
            }
          }
          lastPage = lo;
        }
    
        // Step 2: fetch from last page backwards (newest reports first)
        for (let page = lastPage; page >= 1; page--) {
          const data = page === 1 && probeRes.data
            ? probeRes // reuse first page probe if we loop back to it
            : await h1Fetch("/hackers/me/reports", {
                "page[size]": "100",
                "page[number]": String(page),
              });
          if (!data.data || data.data.length === 0) continue;
          allReports.push(...data.data);
    
          // Early exit: check if we already have enough matches
          const tempFiltered = allReports.filter((r: any) => {
            const prog = r.relationships?.program?.data?.attributes?.handle;
            if (opts.program && prog?.toLowerCase() !== opts.program.toLowerCase()) return false;
            if (opts.severity && r.attributes.severity_rating !== opts.severity) return false;
            if (opts.state && r.attributes.state !== opts.state) return false;
            return true;
          });
          if (tempFiltered.length >= requestedSize) break;
        }
      } else {
        const data = await h1Fetch("/hackers/me/reports", {
          "page[size]": String(fetchSize),
          "page[number]": String(pageNumber),
        });
        allReports = data.data ?? [];
      }
    
      // Map to clean objects — keep vulnerability_information for keyword filtering but strip from final output
      let reports = allReports.map((r: any) => ({
        id: r.id,
        title: r.attributes.title,
        state: r.attributes.state,
        substate: r.attributes.substate,
        severity: r.attributes.severity_rating,
        created_at: r.attributes.created_at,
        disclosed_at: r.attributes.disclosed_at,
        bounty_awarded_at: r.attributes.bounty_awarded_at,
        _vuln_info: r.attributes.vulnerability_information,
        weakness: r.relationships?.weakness?.data?.attributes?.name ?? null,
        program:
          r.relationships?.program?.data?.attributes?.handle ?? null,
      }));
    
      // Client-side filtering
      if (opts.program) {
        const prog = opts.program.toLowerCase();
        reports = reports.filter((r) => r.program?.toLowerCase() === prog);
      }
      if (opts.severity) {
        reports = reports.filter((r) => r.severity === opts.severity);
      }
      if (opts.state) {
        reports = reports.filter((r) => r.state === opts.state);
      }
      if (opts.query) {
        const q = opts.query.toLowerCase();
        reports = reports.filter(
          (r) =>
            r.title?.toLowerCase().includes(q) ||
            r._vuln_info?.toLowerCase().includes(q) ||
            r.weakness?.toLowerCase().includes(q)
        );
      }
    
      // Sort if requested
      if (opts.sort) {
        const desc = opts.sort.startsWith("-");
        const field = opts.sort.replace(/^-/, "").replace("reports.", "");
        reports.sort((a: any, b: any) => {
          const va = a[field] ?? "";
          const vb = b[field] ?? "";
          // Return 0 for ties so the comparator contract is honored
          const cmp = va < vb ? -1 : va > vb ? 1 : 0;
          return desc ? -cmp : cmp;
        });
      }
    
      // Apply page_size limit to filtered results
      if (needsFilter) {
        reports = reports.slice(0, requestedSize);
      }
    
      // Strip internal _vuln_info from output to keep responses small
      return reports.map(({ _vuln_info, ...rest }) => rest);
    }
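    The sort step can be exercised in isolation. A minimal sketch of the same logic as a standalone function (the helper name `sortReports` is hypothetical, not part of the server):
    ```typescript
    // Parse a sort spec like "-reports.created_at" into a field name plus
    // direction, then sort with a comparator that returns 0 for equal keys.
    function sortReports<T extends Record<string, any>>(reports: T[], sort: string): T[] {
      const desc = sort.startsWith("-");
      const field = sort.replace(/^-/, "").replace("reports.", "");
      return [...reports].sort((a, b) => {
        const va = a[field] ?? "";
        const vb = b[field] ?? "";
        const cmp = va < vb ? -1 : va > vb ? 1 : 0;
        return desc ? -cmp : cmp;
      });
    }

    const sorted = sortReports(
      [{ created_at: "2023-01-01" }, { created_at: "2024-06-01" }],
      "-reports.created_at"
    );
    console.log(sorted[0].created_at); // "2024-06-01"
    ```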
  • src/index.ts:23-86 (registration)
    Tool registration for `search_reports` using the MCP server SDK in `src/index.ts`, defining schemas using Zod and invoking the `searchReports` handler.
    server.tool(
      "search_reports",
      "Search and list your HackerOne reports. Filter by keyword, program, severity, or state. Great for finding past reports to reference when drafting new ones.",
      {
        query: z
          .string()
          .optional()
          .describe(
            "Keyword search (e.g. 'SSRF', 'OAuth', 'PassRole', 'S3')"
          ),
        program: z
          .string()
          .optional()
          .describe("Program handle to filter by (e.g. 'uber', 'amazon')"),
        severity: z
          .enum(["none", "low", "medium", "high", "critical"])
          .optional()
          .describe("Filter by severity rating"),
        state: z
          .enum([
            "new",
            "triaged",
            "needs-more-info",
            "resolved",
            "not-applicable",
            "informative",
            "duplicate",
            "spam",
          ])
          .optional()
          .describe("Filter by report state"),
        page_size: z
          .number()
          .min(1)
          .max(100)
          .optional()
          .describe("Results per page (default 25)"),
        page_number: z.number().optional().describe("Page number for pagination"),
        sort: z
          .string()
          .optional()
          .describe(
            "Sort field (e.g. 'reports.created_at' or '-reports.created_at' for desc)"
          ),
      },
      async (params) => {
        try {
          const results = await searchReports(params);
          return {
            content: [
              {
                type: "text" as const,
                text: JSON.stringify(results, null, 2),
              },
            ],
          };
        } catch (err: any) {
          return {
            content: [{ type: "text" as const, text: `Error: ${err.message}` }],
            isError: true,
          };
        }
      }
    );
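    The try/catch pattern in the handler above, which returns an `isError` result instead of throwing, can be factored into a small helper. A sketch under the assumption of MCP-style text content; the name `wrapTool` is hypothetical:
    ```typescript
    // Wrap a tool handler so successes become pretty-printed JSON text
    // content and thrown errors become an isError text result.
    type ToolResult = {
      content: { type: "text"; text: string }[];
      isError?: boolean;
    };

    function wrapTool<P>(handler: (params: P) => Promise<unknown>) {
      return async (params: P): Promise<ToolResult> => {
        try {
          const results = await handler(params);
          return {
            content: [{ type: "text", text: JSON.stringify(results, null, 2) }],
          };
        } catch (err: any) {
          return {
            content: [{ type: "text", text: `Error: ${err.message}` }],
            isError: true,
          };
        }
      };
    }

    // Usage with a stand-in handler (not the real searchReports):
    const demo = wrapTool(async (q: { query: string }) => [{ id: "1", title: q.query }]);
    ```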
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. While it mentions the search/list functionality and filtering capabilities, it doesn't address important behavioral aspects like whether this is a read-only operation, authentication requirements, rate limits, pagination behavior beyond the parameters, or what format the results will be in. The description provides basic functionality but lacks critical operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that each serve a purpose. The first sentence states the core functionality, and the second provides usage context. There's no wasted verbiage, though it could be slightly more structured for optimal front-loading of information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 7-parameter search tool with no annotations and no output schema, the description is insufficiently complete. It doesn't address what the tool returns (report objects? summaries?), how results are structured, pagination behavior beyond the parameters, or error conditions. The usage hint helps but doesn't compensate for the missing behavioral and output context needed for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description mentions filtering by 'keyword, program, severity, or state' which aligns with some parameters but doesn't add meaningful semantic context beyond what's already in the schema. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches and lists HackerOne reports with specific filtering capabilities. It uses specific verbs ('search and list') and identifies the resource ('HackerOne reports'), but doesn't explicitly differentiate from sibling tools like 'get_report' or 'list_programs'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides an implied usage context ('Great for finding past reports to reference when drafting new ones'), which gives some guidance on when to use it. However, it doesn't explicitly state when NOT to use it or mention alternatives among the sibling tools, leaving some ambiguity about tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
