
create-test-cycle

Create a test cycle in qTest Test Execution to organize test runs by sprint, release, or regression campaign. Provide project ID and name; optionally specify a parent cycle.

Instructions

Test Execution — create a test cycle (execution folder) in qTest Test Execution to group test runs for a sprint, release, or regression campaign

Input Schema

Name       Required  Description                                    Default
projectId  Yes       -                                              -
name       Yes       Name of the new test cycle                     -
parentId   No        Parent test cycle ID; omit to create at root   -
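
For illustration, a call might pass arguments shaped like the following; the project and parent cycle IDs are placeholders, not real qTest identifiers.

    // Hypothetical arguments for the create-test-cycle tool (placeholder IDs).
    const rootCycleArgs = {
      projectId: '12345',            // qTest project ID (string)
      name: 'Sprint 42 Regression',  // name of the new test cycle
    }

    const nestedCycleArgs = {
      projectId: '12345',
      name: 'Smoke Tests',
      parentId: 678,                 // existing test cycle to nest under
    }

Omitting parentId creates the cycle at the root of Test Execution.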

Implementation Reference

  • src/server.ts:77-96 (registration)
    Registration of the 'create-test-cycle' tool with name, description, input schema, and handler callback
    server.registerTool(
      'create-test-cycle',
      {
        description:
          'Test Execution — create a test cycle (execution folder) in qTest Test Execution to group test runs for a sprint, release, or regression campaign',
        inputSchema: {
          projectId: z.string(),
          name: z.string().describe('Name of the new test cycle'),
          parentId: z
            .number()
            .int()
            .optional()
            .describe('Parent test cycle ID; omit to create at root'),
        },
      },
      async ({ projectId, name, parentId }) => {
        const result = await createExecutionFolder({ projectId, name, parentId })
        return { content: [{ type: 'text' as const, text: JSON.stringify(result, null, 2) }] }
      }
    )
  • Full implementation file: imports config and qtestFetch, defines the CreateExecutionFolderArgs and ExecutionFolder interfaces, and exports the createExecutionFolder function that makes the API call (a sketch of a compatible qtestFetch helper follows this list)
    import { config } from '@/config.js'
    import { qtestFetch } from '@/client.js'
    
    export interface CreateExecutionFolderArgs {
      projectId: string
      name: string
      parentId?: number
    }
    
    export interface ExecutionFolder {
      id: number
      name: string
      parentId?: number
    }
    
    export async function createExecutionFolder(
      args: CreateExecutionFolderArgs
    ): Promise<ExecutionFolder> {
      const { projectId, name, parentId } = args
      const endpoint =
        parentId !== undefined
          ? `/test-cycles?parentId=${parentId}&parentType=test-cycle`
          : `/test-cycles?parentType=root`
    
      const result = await qtestFetch(config, projectId, endpoint, 'POST', {
        name,
        description: '',
      })
      return result as ExecutionFolder
    }
  • Import statement for createExecutionFolder from the helper implementation file
    import { createExecutionFolder } from '@/tools/test_execution/create_test_cycle.js'
    import { addTestCases } from '@/tools/test_execution/add_test_cases.js'
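
The qtestFetch helper is imported from '@/client.js' and is not shown on this page. As a rough sketch only, a helper compatible with the call signature used above might look like the following; the config fields (baseUrl, apiToken) and the URL layout are assumptions, not taken from the repository.

    // Hypothetical sketch of a qtestFetch-compatible helper. The real client in
    // '@/client.js' may differ; the config shape and URL layout are assumptions.
    interface QTestConfig {
      baseUrl: string   // e.g. 'https://yourcompany.qtestnet.com'
      apiToken: string  // qTest API bearer token
    }

    async function qtestFetchSketch(
      config: QTestConfig,
      projectId: string,
      endpoint: string,
      method: 'GET' | 'POST' | 'PUT' | 'DELETE',
      body?: unknown
    ): Promise<unknown> {
      const url = `${config.baseUrl}/api/v3/projects/${projectId}${endpoint}`
      const response = await fetch(url, {
        method,
        headers: {
          Authorization: `Bearer ${config.apiToken}`,
          'Content-Type': 'application/json',
        },
        body: body !== undefined ? JSON.stringify(body) : undefined,
      })
      if (!response.ok) {
        throw new Error(`qTest API error ${response.status}: ${await response.text()}`)
      }
      return response.json()
    }
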
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since the tool declares no annotations, the description has to carry the behavioral details itself, but it only states the basic creation purpose. It says nothing about permissions, side effects, rate limits, or return values, offering minimal transparency beyond the action itself.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
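
One way to improve this would be to declare behavioral hints alongside the description. The following is a sketch assuming the MCP TypeScript SDK's tool annotations; the hint values are inferred from the implementation, not confirmed by the server.

    // Sketch: behavioral hints via MCP tool annotations. Values are inferred
    // from the implementation (it creates data, never deletes), not confirmed.
    const createTestCycleAnnotations = {
      readOnlyHint: false,     // the call writes data to qTest
      destructiveHint: false,  // it does not modify or delete existing data
      idempotentHint: false,   // repeated calls create duplicate cycles
      openWorldHint: true,     // it talks to an external qTest instance
    }
    // These would be passed as `annotations` in the registerTool config object,
    // next to the `description` and `inputSchema` shown above.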

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with clear purpose upfront. No redundant information, and structure is efficient given the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks details on return values (there is no output schema), error handling, prerequisites (e.g., that the target project must exist), and relationships to sibling tools, leaving an agent without a full picture of the usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
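
Since the implementation returns an ExecutionFolder ({ id, name, parentId? }), one option would be an output schema mirroring that shape. This is a sketch that assumes the MCP SDK version in use accepts an outputSchema in the registerTool config; it is not present in the actual server code.

    // Sketch: an output schema mirroring the ExecutionFolder interface above.
    // Assumes the SDK in use supports `outputSchema` in registerTool.
    import { z } from 'zod'

    const executionFolderOutputSchema = {
      id: z.number().int().describe('ID of the newly created test cycle'),
      name: z.string().describe('Name of the created test cycle'),
      parentId: z.number().int().optional().describe('Parent cycle ID, if nested'),
    }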

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 67% (name and parentId have descriptions, projectId does not). The description adds no additional meaning to any parameter, leaving projectId undocumented and not compensating for missing schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
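
A minimal fix would be to give projectId a description in the same Zod style already used for the other parameters; the wording below is a suggestion, not text from the repository.

    // Sketch: documenting projectId like the other parameters (suggested wording).
    import { z } from 'zod'

    const inputSchema = {
      projectId: z.string().describe('qTest project ID to create the test cycle in'),
      name: z.string().describe('Name of the new test cycle'),
      parentId: z
        .number()
        .int()
        .optional()
        .describe('Parent test cycle ID; omit to create at root'),
    }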

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it creates a test cycle (execution folder) for grouping test runs, with specific use cases like sprints or releases. It differentiates from siblings like delete-test-cycle or list-test-cycle by focusing on creation and adds context about qTest Test Execution, but does not explicitly contrast with add-test-cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit guidance on when to use this tool versus alternatives like add-test-cases or create-module. The description implies it is for grouping test runs but gives no criteria for when not to use it or which alternative to prefer.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
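
A revised description along those lines might read as follows; this wording is a suggestion, not the text the server actually ships.

    // Sketch: a description with explicit usage guidance (suggested wording only).
    const description =
      'Test Execution — create a test cycle (execution folder) in qTest Test Execution ' +
      'to group test runs for a sprint, release, or regression campaign. ' +
      'Use this to create the container first; use add-test-cases to populate an ' +
      'existing cycle, and use create-module for Test Design folders rather than ' +
      'execution folders. Returns the new cycle id, name, and parentId.'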
