
Debugg AI MCP

Official
by debugg-ai

Create Test Case

create_test_case

Create a test case with agent task description and assign it to a test suite, returning its UUID and details.

Instructions

Create an individual test case and assign it to a test suite. The test is NOT automatically executed. Requires name, description, agentTaskDescription (the AI agent's goal), and suite + project identifiers. Optional: relativeUrl (must start with "/") and maxSteps (1-100). Returns {uuid, name, description, agentTaskDescription, suite, project, runCount}.
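
For example, a minimal tool call that identifies the suite and project by UUID might look like the following sketch; every value is a placeholder, and per the description the created test is not run:

    // Hypothetical create_test_case arguments, shown as a TypeScript object.
    // The UUIDs are placeholders for an existing suite and project.
    const args = {
      name: 'Login happy path',
      description: 'Verifies a user can sign in with valid credentials.',
      agentTaskDescription:
        'Go to the login page, sign in as a standard user, and confirm the dashboard loads.',
      suiteUuid: '11111111-1111-1111-1111-111111111111',
      projectUuid: '22222222-2222-2222-2222-222222222222',
      relativeUrl: '/login',
      maxSteps: 25,
    };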

Input Schema

Name                 | Required | Description
name                 | Yes      | Test case name. Required.
description          | Yes      | Test case description. Required.
agentTaskDescription | Yes      | Natural language description of what the AI agent should do and verify. Required.
suiteUuid            | No       | Test suite UUID. Provide suiteUuid OR (suiteName + project identifier).
suiteName            | No       | Test suite name (case-insensitive exact match). Requires projectUuid or projectName.
projectUuid          | No       | Project UUID. Provide projectUuid OR projectName.
projectName          | No       | Project name (case-insensitive exact match). Provide projectUuid OR projectName.
relativeUrl          | No       | Optional starting URL path relative to the app root, e.g. "/login". Must start with "/".
maxSteps             | No       | Maximum agent steps (1-100).
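
If the UUIDs are not at hand, the suite and project can instead be identified by name, which the handler resolves case-insensitively. A sketch of that form, with hypothetical names:

    // Same call, but identifying the suite and project by name instead of UUID.
    // suiteName requires a project identifier (projectUuid or projectName).
    const argsByName = {
      name: 'Checkout validation',
      description: 'Covers required-field validation on the checkout form.',
      agentTaskDescription:
        'Open the checkout page, submit the form empty, and verify validation errors appear.',
      suiteName: 'Checkout',
      projectName: 'storefront-web',
    };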

Implementation Reference

  • Main handler for the create_test_case tool. Resolves project/test suite UUIDs (by name if needed), then calls the API client to create the test case. Returns the created test case JSON on success, or an error response.
    export async function createTestCaseHandler(
      input: CreateTestCaseInput,
      _context: ToolContext,
    ): Promise<ToolResponse> {
      const start = Date.now();
      logger.toolStart('create_test_case', input);
      try {
        const client = new DebuggAIServerClient(config.api.key);
        await client.init();
    
        let projectUuid = input.projectUuid;
        if (!projectUuid) {
          const resolved = await resolveProject(client, input.projectName!);
          if ('error' in resolved) return errorResp(resolved.error, resolved.message, { candidates: (resolved as any).candidates });
          projectUuid = resolved.uuid;
        }
    
        let suiteUuid = input.suiteUuid;
        if (!suiteUuid) {
          const resolved = await resolveTestSuite(client, input.suiteName!, projectUuid);
          if ('error' in resolved) return errorResp(resolved.error, resolved.message, { candidates: (resolved as any).candidates });
          suiteUuid = resolved.uuid;
        }
    
        const testCase = await client.createTestCase({
          name: input.name,
          description: input.description,
          agentTaskDescription: input.agentTaskDescription,
          suiteUuid,
          projectUuid,
          relativeUrl: input.relativeUrl,
          maxSteps: input.maxSteps,
        });
    
        logger.toolComplete('create_test_case', Date.now() - start);
        return { content: [{ type: 'text', text: JSON.stringify(testCase, null, 2) }] };
      } catch (error) {
        logger.toolError('create_test_case', error as Error, Date.now() - start);
        throw handleExternalServiceError(error, 'DebuggAI', 'create_test_case');
      }
    }
  • Zod schema and TypeScript type (CreateTestCaseInput) defining the input shape: name, description, agentTaskDescription, suiteUuid/suiteName, projectUuid/projectName, relativeUrl (must start with /), maxSteps (1-100). A short validation sketch using this schema appears after this reference list.
    export const CreateTestCaseInputSchema = z.object({
      name: z.string().min(1),
      description: z.string().min(1),
      agentTaskDescription: z.string().min(1),
      ...suiteIdentifier,
      ...projectIdentifier,
      relativeUrl: z.string().regex(/^\//, 'Must start with /').optional(),
      maxSteps: z.number().int().min(1).max(100).optional(),
    }).strict();
    
    export type CreateTestCaseInput = z.infer<typeof CreateTestCaseInputSchema>;
  • buildCreateTestCaseTool() builds the Tool metadata (name 'create_test_case', title, description, inputSchema) and buildValidatedCreateTestCaseTool() pairs it with the Zod schema and handler.
    export function buildCreateTestCaseTool(): Tool {
      return {
        name: 'create_test_case',
        title: 'Create Test Case',
        description: 'Create an individual test case and assign it to a test suite. The test is NOT automatically executed. Requires name, description, agentTaskDescription (the AI agent\'s goal), and suite + project identifiers. Optional: relativeUrl (must start with "/") and maxSteps (1-100). Returns {uuid, name, description, agentTaskDescription, suite, project, runCount}.',
        inputSchema: {
          type: 'object',
          properties: {
            name: { type: 'string', description: 'Test case name. Required.', minLength: 1 },
            description: { type: 'string', description: 'Test case description. Required.', minLength: 1 },
            agentTaskDescription: { type: 'string', description: 'Natural language description of what the AI agent should do and verify. Required.', minLength: 1 },
            ...SUITE_PROPS,
            ...PROJECT_PROPS,
            relativeUrl: { type: 'string', description: 'Optional starting URL path relative to the app root, e.g. "/login". Must start with "/".' },
            maxSteps: { type: 'number', description: 'Maximum agent steps (1-100).', minimum: 1, maximum: 100 },
          },
          required: ['name', 'description', 'agentTaskDescription'],
          additionalProperties: false,
        },
      };
    }
    
    export function buildValidatedCreateTestCaseTool(): ValidatedTool {
      return { ...buildCreateTestCaseTool(), inputSchema: CreateTestCaseInputSchema, handler: createTestCaseHandler };
    }
  • tools/index.ts:51-77 (registration)
    Tool registered in the master tool list (line 51) and validated tool list (line 73) in initTools().
      buildCreateTestCaseTool(),
      buildUpdateTestCaseTool(),
      buildDeleteTestCaseTool(),
      buildRunTestSuiteTool(),
      buildGetTestSuiteResultsTool(),
    ];
    const validated: ValidatedTool[] = [
      buildValidatedTestPageChangesTool(ctx),
      buildValidatedTriggerCrawlTool(ctx),
      buildValidatedProbePageTool(),
      buildValidatedSearchProjectsTool(),
      buildValidatedSearchEnvironmentsTool(),
      buildValidatedCreateEnvironmentTool(),
      buildValidatedUpdateEnvironmentTool(),
      buildValidatedDeleteEnvironmentTool(),
      buildValidatedUpdateProjectTool(),
      buildValidatedDeleteProjectTool(),
      buildValidatedSearchExecutionsTool(),
      buildValidatedCreateProjectTool(),
      buildValidatedCreateTestSuiteTool(),
      buildValidatedSearchTestSuitesTool(),
      buildValidatedDeleteTestSuiteTool(),
      buildValidatedCreateTestCaseTool(),
      buildValidatedUpdateTestCaseTool(),
      buildValidatedDeleteTestCaseTool(),
      buildValidatedRunTestSuiteTool(),
      buildValidatedGetTestSuiteResultsTool(),
  • DebuggAIServerClient.createTestCase() — the actual API call to POST api/v1/e2e-tests/. Converts camelCase fields to snake_case for the API and maps the response back; a sketch of the resulting request body also appears after this list.
    public async createTestCase(input: {
      name: string;
      description: string;
      agentTaskDescription: string;
      suiteUuid: string;
      projectUuid: string;
      relativeUrl?: string;
      maxSteps?: number;
    }): Promise<{ uuid: string; name: string; description: string; agentTaskDescription: string; suite: string; project: string; runCount: number }> {
      if (!this.tx) throw new Error('Client not initialized — call init() first');
      const body: Record<string, any> = {
        name: input.name,
        description: input.description,
        agent_task_description: input.agentTaskDescription,
        suite: input.suiteUuid,
        project: input.projectUuid,
        run: false,
      };
      if (input.relativeUrl) body.relative_url = input.relativeUrl;
      if (input.maxSteps) body.max_steps = input.maxSteps;
      const t = await this.tx.post<any>('api/v1/e2e-tests/', body);
      return {
        uuid: t.uuid,
        name: t.name,
        description: t.description,
        agentTaskDescription: t.agentTaskDescription ?? t.agent_task_description ?? '',
        suite: t.suite ?? input.suiteUuid,
        project: t.project ?? input.projectUuid,
        runCount: t.runCount ?? t.run_count ?? 0,
      };
    }
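
Two short sketches follow from the reference above. First, a minimal sketch of how CreateTestCaseInputSchema rejects an invalid relativeUrl; the input values are made up, and only the schema shown earlier is assumed:

    // Validating hypothetical input with the schema above (assumes
    // CreateTestCaseInputSchema is imported from the schema module shown earlier).
    // A relativeUrl without a leading "/" fails the regex, and safeParse reports it.
    const result = CreateTestCaseInputSchema.safeParse({
      name: 'Bad path',
      description: 'Demonstrates a rejected input.',
      agentTaskDescription: 'Anything at all.',
      suiteUuid: '11111111-1111-1111-1111-111111111111',
      projectUuid: '22222222-2222-2222-2222-222222222222',
      relativeUrl: 'login', // missing the leading "/"
    });
    if (!result.success) {
      console.error(result.error.issues.map((i) => i.message)); // e.g. ['Must start with /']
    }

Second, the approximate body that createTestCase() would POST to api/v1/e2e-tests/ for the UUID-based example earlier, after the camelCase-to-snake_case mapping; this is a sketch of the mapping, not captured traffic:

    // Approximate request body built by createTestCase() for the earlier example.
    // "run: false" reflects that creating a test case never executes it.
    const body = {
      name: 'Login happy path',
      description: 'Verifies a user can sign in with valid credentials.',
      agent_task_description:
        'Go to the login page, sign in as a standard user, and confirm the dashboard loads.',
      suite: '11111111-1111-1111-1111-111111111111',
      project: '22222222-2222-2222-2222-222222222222',
      run: false,
      relative_url: '/login',
      max_steps: 25,
    };
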
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It discloses that the test is NOT automatically executed and lists the return fields. It does not mention auth requirements or side effects, but is generally transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (4 sentences) and front-loaded with purpose and key behavior. Every sentence adds meaningful information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 9 parameters, no output schema, and no annotations, the description provides the return format, explains the parameter groups, and notes that the test is not executed automatically. It lacks error-case documentation but is largely complete for a creation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters have schema descriptions (100% coverage), so the baseline is 3. The description adds context beyond the schema, explaining agentTaskDescription as 'the AI agent's goal' and reinforcing the constraints on relativeUrl and maxSteps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool creates an individual test case and assigns it to a test suite. It distinguishes itself from sibling tools such as update_test_case and create_test_suite by specifying 'create' and 'assign to suite'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says the test is not automatically executed, which is a key usage guideline. It lists required and optional parameters, but does not explicitly compare to alternatives like update_test_case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

