
Debugg AI MCP

Official
by debugg-ai

Get Test Suite Results

get_test_suite_results

Fetch test suite results including per-test outcomes, pass rate, and run status. Specify suite via UUID or name with project identifier.

Instructions

Fetch a test suite with full per-test results. Returns suite-level status (NEVER_RUN, PENDING, RUNNING, COMPLETED, ERROR), pass rate, last run timestamp, and per-test outcomes (PASS, FAIL, ERROR, TIMEOUT, etc.) with execution times. Accepts suiteUuid directly or suiteName + project identifier.
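The status and outcome enumerations above can be modeled as string-literal unions. A sketch; note the description's "etc." implies additional outcome values exist, so the outcome union below covers only the values named here:

```typescript
// Suite-level run statuses named in the description.
type SuiteRunStatus = 'NEVER_RUN' | 'PENDING' | 'RUNNING' | 'COMPLETED' | 'ERROR';

// Per-test outcomes; "etc." in the description means this list is not exhaustive.
type TestOutcome = 'PASS' | 'FAIL' | 'ERROR' | 'TIMEOUT';

// A suite is in a terminal state once its run has finished or failed.
function isTerminal(status: SuiteRunStatus): boolean {
  return status === 'COMPLETED' || status === 'ERROR';
}
```

An agent polling a long-running suite would typically re-fetch until `isTerminal` holds.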

Input Schema

Name         Required  Description
suiteUuid    No        Test suite UUID. Provide suiteUuid OR (suiteName + project identifier).
suiteName    No        Test suite name (case-insensitive exact match). Requires projectUuid or projectName.
projectUuid  No        Project UUID. Provide projectUuid OR projectName.
projectName  No        Project name (case-insensitive exact match). Provide projectUuid OR projectName.

No parameters define defaults.
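Two well-formed input shapes follow from the schema. A sketch with hypothetical values (the UUID, suite name, and project name below are illustrative, not real identifiers):

```typescript
// Shape 1: identify the suite directly by UUID (hypothetical value).
const byUuid = { suiteUuid: '123e4567-e89b-12d3-a456-426614174000' };

// Shape 2: identify the suite by name, scoped to a project (hypothetical names).
const byName = { suiteName: 'Checkout Flow', projectName: 'storefront' };

// The cross-field rule from the table: either a suiteUuid, or a suiteName
// plus at least one project identifier (projectUuid or projectName).
function hasValidIdentifier(input: {
  suiteUuid?: string;
  suiteName?: string;
  projectUuid?: string;
  projectName?: string;
}): boolean {
  if (input.suiteUuid) return true;
  return Boolean(input.suiteName && (input.projectUuid || input.projectName));
}
```

Note that a bare `suiteName` with no project identifier fails this rule, matching the "Requires projectUuid or projectName" constraint in the table.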

Implementation Reference

  • Main handler for get_test_suite_results tool. Resolves suiteUuid (directly or via suiteName+project), then fetches and returns full test suite detail including per-test outcomes.
    export async function getTestSuiteResultsHandler(
      input: GetTestSuiteResultsInput,
      _context: ToolContext,
    ): Promise<ToolResponse> {
      const start = Date.now();
      logger.toolStart('get_test_suite_results', input);
      try {
        const client = new DebuggAIServerClient(config.api.key);
        await client.init();
    
        let suiteUuid = input.suiteUuid;
        if (!suiteUuid) {
          let projectUuid = input.projectUuid;
          if (!projectUuid) {
            const resolved = await resolveProject(client, input.projectName!);
            if ('error' in resolved) return errorResp(resolved.error, resolved.message, { candidates: (resolved as any).candidates });
            projectUuid = resolved.uuid;
          }
          const resolved = await resolveTestSuite(client, input.suiteName!, projectUuid);
          if ('error' in resolved) return errorResp(resolved.error, resolved.message, { candidates: (resolved as any).candidates });
          suiteUuid = resolved.uuid;
        }
    
        const detail = await client.getTestSuiteDetail(suiteUuid);
        logger.toolComplete('get_test_suite_results', Date.now() - start);
        return { content: [{ type: 'text', text: JSON.stringify(detail, null, 2) }] };
      } catch (error) {
        logger.toolError('get_test_suite_results', error as Error, Date.now() - start);
        throw handleExternalServiceError(error, 'DebuggAI', 'get_test_suite_results');
      }
    }
  • Zod schema (GetTestSuiteResultsInputSchema) defining input: optional suiteUuid or suiteName + projectUuid/projectName. Type inferred as GetTestSuiteResultsInput.
    export const GetTestSuiteResultsInputSchema = z.object({
      ...suiteIdentifier,
      ...projectIdentifier,
    }).strict();
    
    export type GetTestSuiteResultsInput = z.infer<typeof GetTestSuiteResultsInputSchema>;
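The `suiteIdentifier` and `projectIdentifier` fragments are not shown in this excerpt; under the assumption they are plain optional string fields, the composed object's shape is equivalent to the interface below. The `.strict()` call means unknown keys are rejected rather than silently stripped; a hand-rolled equivalent of that check:

```typescript
// Assumed shape of the composed input. The real Zod fragments are not shown
// in the excerpt; this mirrors only the documented fields.
interface GetTestSuiteResultsInput {
  suiteUuid?: string;
  suiteName?: string;
  projectUuid?: string;
  projectName?: string;
}

const ALLOWED_KEYS = new Set(['suiteUuid', 'suiteName', 'projectUuid', 'projectName']);

// strict() rejects unrecognized keys instead of dropping them; this sketch
// reproduces that behavior for a raw, untyped payload.
function hasOnlyKnownKeys(raw: Record<string, unknown>): boolean {
  return Object.keys(raw).every((k) => ALLOWED_KEYS.has(k));
}
```

Strict parsing is a useful default for agent-facing tools: a misspelled parameter surfaces as a validation error instead of being ignored.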
  • buildGetTestSuiteResultsTool() defines the tool name, title, description, and inputSchema. buildValidatedGetTestSuiteResultsTool() wires schema + handler together.
    export function buildGetTestSuiteResultsTool(): Tool {
      return {
        name: 'get_test_suite_results',
        title: 'Get Test Suite Results',
        description: 'Fetch a test suite with full per-test results. Returns suite-level status (NEVER_RUN, PENDING, RUNNING, COMPLETED, ERROR), pass rate, last run timestamp, and per-test outcomes (PASS, FAIL, ERROR, TIMEOUT, etc.) with execution times. Accepts suiteUuid directly or suiteName + project identifier.',
        inputSchema: {
          type: 'object',
          properties: {
            ...SUITE_PROPS,
            ...PROJECT_PROPS,
          },
          additionalProperties: false,
        },
      };
    }
    
    export function buildValidatedGetTestSuiteResultsTool(): ValidatedTool {
      return { ...buildGetTestSuiteResultsTool(), inputSchema: GetTestSuiteResultsInputSchema, handler: getTestSuiteResultsHandler };
    }
  • initTools() (tools/index.ts:34-85) registers all tools, including buildGetTestSuiteResultsTool() and buildValidatedGetTestSuiteResultsTool(), into the tool registry.
    export function initTools(ctx: ProjectContext | null): void {
      const tools: Tool[] = [
        buildTestPageChangesTool(ctx),
        buildTriggerCrawlTool(ctx),
        buildProbePageTool(),
        buildSearchProjectsTool(),
        buildSearchEnvironmentsTool(),
        buildCreateEnvironmentTool(),
        buildUpdateEnvironmentTool(),
        buildDeleteEnvironmentTool(),
        buildUpdateProjectTool(),
        buildDeleteProjectTool(),
        buildSearchExecutionsTool(),
        buildCreateProjectTool(),
        buildCreateTestSuiteTool(),
        buildSearchTestSuitesTool(),
        buildDeleteTestSuiteTool(),
        buildCreateTestCaseTool(),
        buildUpdateTestCaseTool(),
        buildDeleteTestCaseTool(),
        buildRunTestSuiteTool(),
        buildGetTestSuiteResultsTool(),
      ];
      const validated: ValidatedTool[] = [
        buildValidatedTestPageChangesTool(ctx),
        buildValidatedTriggerCrawlTool(ctx),
        buildValidatedProbePageTool(),
        buildValidatedSearchProjectsTool(),
        buildValidatedSearchEnvironmentsTool(),
        buildValidatedCreateEnvironmentTool(),
        buildValidatedUpdateEnvironmentTool(),
        buildValidatedDeleteEnvironmentTool(),
        buildValidatedUpdateProjectTool(),
        buildValidatedDeleteProjectTool(),
        buildValidatedSearchExecutionsTool(),
        buildValidatedCreateProjectTool(),
        buildValidatedCreateTestSuiteTool(),
        buildValidatedSearchTestSuitesTool(),
        buildValidatedDeleteTestSuiteTool(),
        buildValidatedCreateTestCaseTool(),
        buildValidatedUpdateTestCaseTool(),
        buildValidatedDeleteTestCaseTool(),
        buildValidatedRunTestSuiteTool(),
        buildValidatedGetTestSuiteResultsTool(),
      ];
    
      _tools = tools;
      _validatedTools = validated;
    
      toolRegistry.clear();
      for (const v of validated) toolRegistry.set(v.name, v);
    }
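The registration pattern above boils down to a name-keyed map consulted at call time. A minimal sketch with simplified stand-in types (`MiniValidatedTool` and `dispatch` are illustrative names, not part of the real codebase, which uses `ValidatedTool` and richer handler signatures):

```typescript
// Simplified stand-in for the real ValidatedTool type: just the fields
// the registry needs for lookup and dispatch.
interface MiniValidatedTool {
  name: string;
  handler: (input: unknown) => string;
}

const toolRegistry = new Map<string, MiniValidatedTool>();

// Mirror of initTools(): clear, then re-register every tool by name.
function registerAll(tools: MiniValidatedTool[]): void {
  toolRegistry.clear();
  for (const t of tools) toolRegistry.set(t.name, t);
}

// Dispatch a call by tool name, as an MCP server's call-tool handler would.
function dispatch(name: string, input: unknown): string {
  const tool = toolRegistry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(input);
}

registerAll([{ name: 'get_test_suite_results', handler: () => 'ok' }]);
```

Clearing before re-registering keeps initTools() idempotent, so re-initializing with a new ProjectContext cannot leave stale entries behind.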
  • getTestSuiteDetail() method on DebuggAIServerClient — fetches suite detail from API, maps camelCase/snake_case fields, returns full results with per-test data.
    public async getTestSuiteDetail(suiteUuid: string): Promise<{
      uuid: string;
      name: string;
      runStatus: string;
      testsCount: number;
      passRate: number | null;
      lastRunAt: string | null;
      tests: Array<{
        uuid: string;
        name: string;
        runCount: number;
        passedRunsCount: number;
        failedRunsCount: number;
        passRate: number | null;
        lastRun: { uuid: string; status: string; outcome: string; executionTime: number | null; timestamp: string } | null;
      }>;
    }> {
      if (!this.tx) throw new Error('Client not initialized — call init() first');
      const s = await this.tx.get<any>(`api/v1/test-suites/${suiteUuid}/`);
      const tests = s.tests ?? [];
      return {
        uuid: s.uuid,
        name: s.name,
        runStatus: s.runStatus ?? s.run_status ?? 'NEVER_RUN',
        testsCount: tests.length,
        passRate: s.passRate ?? s.pass_rate ?? null,
        lastRunAt: s.lastRunAt ?? s.last_run_at ?? null,
        tests: tests.map((t: any) => {
          // Backend returns cur_run (latest run) per test in the suite detail view
          const lastRun = t.curRun ?? t.cur_run ?? t.lastRun ?? t.last_run ?? null;
          return {
            uuid: t.uuid,
            name: t.name,
            runCount: t.runCount ?? t.run_count ?? 0,
            passedRunsCount: t.passedRunsCount ?? t.passed_runs_count ?? 0,
            failedRunsCount: t.failedRunsCount ?? t.failed_runs_count ?? 0,
            passRate: t.passRate ?? t.pass_rate ?? null,
            lastRun: lastRun ? {
              uuid: lastRun.uuid,
              status: lastRun.status,
              outcome: lastRun.outcome,
              executionTime: lastRun.executionTime ?? lastRun.execution_time ?? null,
              timestamp: lastRun.timestamp,
            } : null,
          };
        }),
      };
    }
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes the return data (status, pass rate, timestamps, per-test outcomes) but does not mention the tool's read-only nature, required permissions, error behavior, or rate limits. Since there are no annotations, the description carries the full burden of disclosure, leaving gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. First sentence defines purpose and output, second explains parameter options. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers the return fields for a fetch tool without an output schema. Missing details on error handling and limits, but sufficient for standard use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already documents every parameter. The description adds value by summarizing the logical grouping (direct UUID vs. name + project) and the case-insensitivity rules, clarifying usage constraints beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Fetch a test suite with full per-test results', using a specific verb and resource, and lists the return fields. This distinguishes it from sibling tools like run_test_suite and search_test_suites.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear parameter-combination options (suiteUuid, or suiteName + project), but lacks explicit when-to-use or when-not-to-use guidance relative to its sibling tools. No exclusion guidance is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
