get_suggested_questions

Read-only · Idempotent

Generates prioritized review questions from cached analyses to highlight key areas during PR review, with severity levels and follow-up tool suggestions.

Instructions

Auto-generated, prioritized review questions derived from the analyses we already cache (untested framework entry points, circular imports, ast-clone clusters, dead-export drift, untested-but-exported symbols). Use during PR review to surface "what should I be looking at?" without manually chaining six tools. Each question carries a severity (high/medium/low) and the follow-up tool to drill in. Read-only. Returns JSON: { questions: [{ id, severity, question, reason, follow_up }], total, generated_at }.
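
For orientation, a response might look like the following. The shape follows the JSON contract above; the symbol name, file path, symbol_id, and timestamp are illustrative placeholders, not output from a real run.

    {
      "questions": [
        {
          "id": "untested_framework_entry_point",
          "severity": "high",
          "question": "Is \"UserController\" exercised by an integration or unit test?",
          "reason": "src/user/user.controller.ts declares a framework entry point (controller/service/handler) with no obvious test file partner.",
          "follow_up": { "tool": "get_tests_for", "args": { "symbol_id": 42 } }
        },
        {
          "id": "circular_imports",
          "severity": "medium",
          "question": "Are there any circular import chains in the changed surface?",
          "reason": "Circular imports inflate cold-start time and break tree-shaking; if accidental they should be broken with an interface or DI.",
          "follow_up": { "tool": "get_circular_imports" }
        }
      ],
      "total": 2,
      "generated_at": "2025-06-01T12:00:00.000Z"
    }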

Input Schema


No arguments

Implementation Reference

  • Main handler function `getSuggestedQuestions` that generates prioritized review questions from cached analyses (untested framework entry points, circular imports, AST clone clusters, dead exports, untested symbols). Queries the SQLite store for framework symbols, checks for test file partners, and compiles a ranked list of questions with severity and follow-up tool info.
    export function getSuggestedQuestions(store: Store): SuggestedQuestionsResult {
      const questions: SuggestedQuestion[] = [];
    
      // ── 1. Framework entry points without an obvious test partner ────────────
      // Controllers/services/repositories carry the same false-positive risk as
      // dead-code analysis: they're framework-managed entry points. If there's
      // no matching test file, the reviewer should know.
      const fwSymbols = store.db
        .prepare(`
        SELECT s.symbol_id, s.name, f.path AS file_path, s.metadata
        FROM symbols s
        JOIN files f ON s.file_id = f.id
        WHERE s.metadata IS NOT NULL
          AND (
            json_extract(s.metadata, '$.frameworkRole') IS NOT NULL
            OR json_extract(s.metadata, '$.decorators') IS NOT NULL
            OR json_extract(s.metadata, '$.annotations') IS NOT NULL
          )
          AND s.kind IN ('class', 'function')
        LIMIT 200
      `)
        .all() as SymbolMetaRow[];
    
      let untested = 0;
      for (const row of fwSymbols) {
        const testGlob = guessTestGlobs(row.file_path, row.name);
        const hasTest = testGlob.some((g) => {
          const probe = store.db.prepare('SELECT 1 FROM files WHERE path GLOB ? LIMIT 1').get(g) as
            | { 1: number }
            | undefined;
          return Boolean(probe);
        });
        if (!hasTest) {
          untested++;
          if (untested <= 3) {
            questions.push({
              id: 'untested_framework_entry_point',
              severity: 'high',
              question: `Is "${row.name}" exercised by an integration or unit test?`,
              reason: `${row.file_path} declares a framework entry point (controller/service/handler) with no obvious test file partner.`,
              follow_up: { tool: 'get_tests_for', args: { symbol_id: row.symbol_id } },
            });
          }
        }
      }
      if (untested > 3) {
        questions.push({
          id: 'untested_framework_entry_point_summary',
          severity: 'medium',
          question: `Should the team triage the ${untested - 3} additional untested framework entry points?`,
          reason: `${untested} entry points lack an obvious test file. Showing the first 3 above.`,
          follow_up: { tool: 'get_untested_exports', args: {} },
        });
      }
    
      // ── 2. Circular imports — defer to the on-demand tool ───────────────────
      // We don't cache cycles in the DB. The question still belongs in the
      // canned list because cycle hunting is a recurring review task.
      questions.push({
        id: 'circular_imports',
        severity: 'medium',
        question: 'Are there any circular import chains in the changed surface?',
        reason:
          'Circular imports inflate cold-start time and break tree-shaking; if accidental they should be broken with an interface or DI.',
        follow_up: { tool: 'get_circular_imports' },
      });
    
      // ── 3. Symbol duplication clusters — defer to detect_ast_clones ─────────
      // ast clones aren't cached in a table; suggest running the tool.
      questions.push({
        id: 'ast_clone_cluster',
        severity: 'medium',
        question: 'Have any structural clones (Type-2) appeared on this branch?',
        reason:
          'Type-2 clones share an AST shape after identifier/literal normalisation — the prime DRY-refactor candidates.',
        follow_up: { tool: 'detect_ast_clones' },
      });
    
      // ── 4. High-confidence dead exports (post-framework-aware filter) ───────
      // Use the same JSON-extract trick as the dead-code module to count
      // unreferenced exports. Safe even if the dead_code_v2 cache isn't built.
      const exportedCount = store.db
        .prepare(`
        SELECT COUNT(*) AS cnt
        FROM symbols s
        WHERE json_extract(s.metadata, '$.exported') = 1
          AND s.kind != 'method'
      `)
        .get() as { cnt: number };
      if (exportedCount.cnt > 50) {
        questions.push({
          id: 'dead_export_audit',
          severity: 'low',
          question: `Are all ${exportedCount.cnt} exports actually consumed, or has the public surface drifted?`,
          reason:
            'Public APIs accrete over time; a periodic dead-export audit catches code that should have been deleted in a prior PR.',
          follow_up: { tool: 'get_dead_exports' },
        });
      }
    
      // ── 5. Untested-but-exported symbols ────────────────────────────────────
      // Different signal from #1: this catches plain exports without test
      // coverage, not specifically framework entry points.
      // (Cheaper than running get_untested_symbols inline; we just ask the
      // question if the project has any test files at all.)
      const hasTests = store.db
        .prepare("SELECT 1 FROM files WHERE path LIKE '%.test.%' OR path LIKE '%/__tests__/%' LIMIT 1")
        .get();
      if (hasTests) {
        questions.push({
          id: 'untested_symbols',
          severity: 'medium',
          question: 'Which exported symbols have no test coverage at all (vs imported-but-not-called)?',
          reason:
            'get_untested_symbols classifies "unreached" vs "imported_not_called" — the unreached set is the highest-leverage place to add tests.',
          follow_up: { tool: 'get_untested_symbols' },
        });
      }
    
      // Sort: severity desc, then by id for stable output.
      const severityRank = { high: 0, medium: 1, low: 2 } as const;
      questions.sort(
        (a, b) => severityRank[a.severity] - severityRank[b.severity] || a.id.localeCompare(b.id),
      );
    
      return {
        questions: questions.slice(0, QUESTION_LIMIT),
        total: questions.length,
        generated_at: new Date().toISOString(),
      };
    }
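
    Direct usage is a one-liner; a minimal sketch, assuming an already-open Store instance:

    // Minimal sketch: `store` is an open SQLite-backed Store (see the signature above).
    const result = getSuggestedQuestions(store);
    for (const q of result.questions) {
      console.log(`[${q.severity}] ${q.question} -> follow up with ${q.follow_up.tool}`);
    }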
  • Type definitions for `SuggestedQuestion` and `SuggestedQuestionsResult` interfaces. Each question has id, severity (high/medium/low), question text, reason, and follow_up tool reference with optional args.
    export interface SuggestedQuestion {
      /** Stable identifier for the question template — useful for filtering. */
      id: string;
      /** Severity bucket. high = blocking before merge, medium = should review,
       * low = note for follow-up. */
      severity: 'high' | 'medium' | 'low';
      /** Short, single-sentence question phrased for a reviewer. */
      question: string;
      /** Why this question was generated — names the symbol/file/metric. */
      reason: string;
      /** Tool the reviewer should run to answer it. */
      follow_up: { tool: string; args?: Record<string, unknown> };
    }
    
    export interface SuggestedQuestionsResult {
      questions: SuggestedQuestion[];
      total: number;
      generated_at: string;
    }
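
    Since the doc comment above defines 'high' as blocking before merge, a consumer might triage on that field; a hypothetical helper:

    // Hypothetical helper: keep only the questions that should block a merge
    // ('high' severity, per the SuggestedQuestion doc comment).
    function blockingQuestions(result: SuggestedQuestionsResult): SuggestedQuestion[] {
      return result.questions.filter((q) => q.severity === 'high');
    }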
  • MCP tool registration of 'get_suggested_questions' via `server.tool()`. No input params (empty schema); the handler is dynamically imported from suggested-questions.js and its result is returned as JSON text.
    // --- Suggested Review Questions ---
    
    server.tool(
      'get_suggested_questions',
      'Auto-generated, prioritized review questions derived from the analyses we already cache (untested framework entry points, circular imports, ast-clone clusters, dead-export drift, untested-but-exported symbols). Use during PR review to surface "what should I be looking at?" without manually chaining six tools. Each question carries a severity (high/medium/low) and the follow-up tool to drill in. Read-only. Returns JSON: { questions: [{ id, severity, question, reason, follow_up }], total, generated_at }.',
      {},
      async () => {
        const { getSuggestedQuestions } = await import('../quality/suggested-questions.js');
        const result = getSuggestedQuestions(store);
        return { content: [{ type: 'text', text: j(result) }] };
      },
    );
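
    For completeness, a minimal sketch of invoking the registered tool from the official TypeScript MCP client over stdio. The launch command and client name are hypothetical; adjust them to the actual trace-mcp entry point.

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Hypothetical launch command for the server process.
    const transport = new StdioClientTransport({ command: 'node', args: ['dist/index.js'] });
    const client = new Client({ name: 'review-helper', version: '0.1.0' });
    await client.connect(transport);

    // Empty arguments object: the tool's input schema has no parameters.
    const res = await client.callTool({ name: 'get_suggested_questions', arguments: {} });
    // The result's text content is the JSON payload described in the Instructions section.
    console.log(res.content);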
  • Helper function `guessTestGlobs` that generates candidate test file paths from a source file path (e.g., `.test.ts`, `.spec.ts`, `__tests__/` patterns) to probe whether a test partner exists.
    function guessTestGlobs(filePath: string, _symbolName: string): string[] {
      const segments = filePath.split('/');
      const basename = segments[segments.length - 1];
      const stem = basename.replace(/\.[^.]+$/, '');
      return [
        filePath.replace(/\.([jt]sx?|py|java|kt|rb|go|rs)$/, '.test.$1'),
        filePath.replace(/\.([jt]sx?|py|java|kt|rb|go|rs)$/, '.spec.$1'),
        `**/__tests__/**/${stem}*`,
        `tests/**/${stem}*`,
        `**/*${stem}.test.*`,
        `**/*${stem}.spec.*`,
        `**/test_${stem}.py`,
      ];
    }
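
    For concreteness, tracing guessTestGlobs with a hypothetical input shows the candidate patterns it probes (the path is illustrative):

    // guessTestGlobs('src/user/user.service.ts', 'UserService') yields:
    // [
    //   'src/user/user.service.test.ts',
    //   'src/user/user.service.spec.ts',
    //   '**/__tests__/**/user.service*',
    //   'tests/**/user.service*',
    //   '**/*user.service.test.*',
    //   '**/*user.service.spec.*',
    //   '**/test_user.service.py',
    // ]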
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description discloses read-only nature, output format (JSON with questions, severity, follow_up), and derivation from cached analyses. No contradictions with annotations; adds value beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise (~80 words), front-loaded with key purpose and usage context. Every sentence adds value; no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 0 parameters and no output schema, description fully explains output structure and content. Complete for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters, so the description need not add parameter info. The baseline score of 4 applies; no additional parameter documentation is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states that the tool auto-generates prioritized review questions from cached analyses, listing the specific analysis types. It distinguishes itself from sibling tools, each of which runs an individual analysis, by serving as an aggregator.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Clearly states use during PR review and that it replaces manually chaining six tools. Implicitly indicates when to use, but no explicit when-not-to-use or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
