
run_automation_rules

Trigger an automation rule on a specific test cycle using the rule key, project ID, and cycle ID. Returns a background task object with taskId and progressUrl to monitor completion.

Instructions

Trigger an automation rule to run against a specific test cycle. testCycleId is the internal id string (from get_test_cycle). Returns a background task object with taskId and progressUrl to poll for completion.

Input Schema

automationRuleKey (required): Automation rule key to run
projectId (required): Jira project numeric ID (e.g. 10011)
testCycleId (required): Internal test cycle ID (from search_test_cycles)
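As a sketch of how these inputs map onto the underlying request, the snippet below builds the URL path and POST body the handler constructs. All three argument values are hypothetical placeholders, not real identifiers:

```typescript
// Example argument values for run_automation_rules; the values below are
// hypothetical placeholders chosen to match the schema's types.
const args = {
  automationRuleKey: "AR-12", // hypothetical automation rule key
  projectId: 10011,           // Jira project numeric ID (example from the schema)
  testCycleId: "12345",       // internal cycle ID, e.g. from search_test_cycles
};

// The rule key goes into the URL path; only projectId and testCycleId
// are sent in the JSON request body (see the handler below).
const path = `/automation-rule/${args.automationRuleKey}/run`;
const body = JSON.stringify({ projectId: args.projectId, testCycleId: args.testCycleId });
```

Note that the rule key is never part of the body, so a malformed key surfaces as a path-level error rather than a validation error on the payload.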

Implementation Reference

  • The async handler function that executes the 'run_automation_rules' tool. It makes a POST request to /automation-rule/{automationRuleKey}/run with projectId and testCycleId, returning the background task result.
      async ({ automationRuleKey, projectId, testCycleId }) =>
        ok(
          await qtmFetch(`/automation-rule/${automationRuleKey}/run`, {
            method: "POST",
            body: JSON.stringify({ projectId, testCycleId }),
          })
        )
  • The input schema for run_automation_rules: requires automationRuleKey (string), projectId (number), and testCycleId (string).
    {
      automationRuleKey: z.string().describe("Automation rule key to run"),
      projectId: z.number().int().describe("Jira project numeric ID (e.g. 10011)"),
      testCycleId: z.string().describe("Internal test cycle ID (from search_test_cycles)"),
    },
  • src/index.ts:719-734 (registration)
    Tool registration via the 'tool()' wrapper which calls server.registerTool with name 'run_automation_rules', description, inputSchema, and the handler callback.
    tool(
      "run_automation_rules",
      "Trigger an automation rule to run against a specific test cycle. testCycleId is the internal id string (from get_test_cycle). Returns a background task object with taskId and progressUrl to poll for completion.",
      {
        automationRuleKey: z.string().describe("Automation rule key to run"),
        projectId: z.number().int().describe("Jira project numeric ID (e.g. 10011)"),
        testCycleId: z.string().describe("Internal test cycle ID (from search_test_cycles)"),
      },
      async ({ automationRuleKey, projectId, testCycleId }) =>
        ok(
          await qtmFetch(`/automation-rule/${automationRuleKey}/run`, {
            method: "POST",
            body: JSON.stringify({ projectId, testCycleId }),
          })
        )
    );
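Since the tool returns a background task object with taskId and progressUrl rather than a final result, a caller is expected to poll until the task finishes. Below is a minimal polling sketch; the progress response shape ({ status }) and the terminal status values are assumptions, not documented by this tool:

```typescript
// Shape of the object this tool returns, per its description.
type BackgroundTask = { taskId: string; progressUrl: string };

// Minimal polling loop. The { status } response field and the terminal
// values "COMPLETED" / "FAILED" are assumptions for illustration only.
async function waitForTask(
  task: BackgroundTask,
  fetchJson: (url: string) => Promise<{ status: string }>,
  maxAttempts = 30,
  delayMs = 2000,
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status } = await fetchJson(task.progressUrl);
    if (status === "COMPLETED" || status === "FAILED") return status;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Task ${task.taskId} did not finish after ${maxAttempts} polls`);
}
```

Injecting fetchJson keeps the sketch self-contained and testable; in practice it would wrap the same authenticated fetch the server uses for qtmFetch.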
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must cover behavioral traits. It discloses the async nature by describing the return object (taskId, progressUrl), but does not mention side effects, permissions, or failure modes. This provides moderate transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two concise sentences: one for purpose, one for parameter clarification and return value. No redundancy, perfectly front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking output schema, the description explains the return format (background task with taskId and progressUrl) and references the source of testCycleId. It assumes some domain knowledge but covers essential usage points for a 3-required-param tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds marginal value by specifying testCycleId as 'internal id string (from get_test_cycle),' but schema already says 'Internal test cycle ID (from search_test_cycles).' No meaningful addition for other parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Trigger an automation rule to run against a specific test cycle,' specifying the verb (trigger), resource (automation rule), and target (test cycle). It distinguishes from sibling tools like link_automation_rule by indicating execution rather than association.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes that testCycleId is the internal id from get_test_cycle, implying a prerequisite, and mentions the async return result. However, it does not explicitly compare with alternatives like link_automation_rule or state when not to use this tool, leaving usage context incomplete.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

