TestCollab MCP Server

by TCSoftInc

get_test_plan

Fetch a test plan with details like test case count, configurations, runs, and execution progress status using ID or title.

Instructions

Fetch a single test plan with summary details:

  • Included test cases count

  • Test plan configurations

  • Test plan runs

  • Current execution progress status

Required: id or title
Optional: project_id, include_configurations, include_runs, runs_limit, runs_offset, runs_sort

Example: { "id": 812, "project_id": 16 }

or

{ "title": "Release 3.0 Regression", "project_id": 16 }

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| id | No | Test plan ID to retrieve. Accepts numeric ID or title string. | |
| title | No | Test plan title to retrieve (alternative to id). | |
| project_id | No | Project ID (uses default if not specified). | |
| include_configurations | No | Include test plan configurations in the response. | true |
| include_runs | No | Include test plan runs in the response. | true |
| runs_limit | No | Maximum number of runs to return (1-100). | 20 |
| runs_offset | No | Number of runs to skip. | 0 |
| runs_sort | No | Run sort expression. | id:desc |
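
The defaults and bounds above can be applied client-side before invoking the tool. A minimal sketch in TypeScript; the argument names come from the schema, while the `withRunDefaults` helper itself is hypothetical, not part of the server:

```typescript
// Hypothetical helper: applies the documented defaults for
// get_test_plan's run-pagination arguments and clamps runs_limit
// to the schema's 1-100 range.
interface GetTestPlanArgs {
  id?: number | string;
  title?: string;
  project_id?: number;
  include_configurations?: boolean;
  include_runs?: boolean;
  runs_limit?: number;
  runs_offset?: number;
  runs_sort?: string;
}

function withRunDefaults(args: GetTestPlanArgs): GetTestPlanArgs {
  return {
    ...args,
    include_configurations: args.include_configurations ?? true,
    include_runs: args.include_runs ?? true,
    runs_limit: Math.min(Math.max(args.runs_limit ?? 20, 1), 100),
    runs_offset: Math.max(args.runs_offset ?? 0, 0),
    runs_sort: args.runs_sort ?? "id:desc",
  };
}
```

For example, `withRunDefaults({ id: 812, project_id: 16 })` yields `runs_limit: 20` and `runs_sort: "id:desc"` without overriding any value the caller supplied.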

Implementation Reference

  • The main handler function for the get_test_plan tool, responsible for parsing arguments, validating project context, fetching the test plan, and constructing the response.
    export async function handleGetTestPlan(
      args: unknown
    ): Promise<{ content: Array<{ type: "text"; text: string }> }> {
      const parsed = getTestPlanSchema.safeParse(args);
      if (!parsed.success) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                error: {
                  code: "VALIDATION_ERROR",
                  message: "Invalid input parameters",
                  details: parsed.error.errors,
                },
              }),
            },
          ],
        };
      }
    
      const {
        id,
        title,
        project_id,
        include_configurations,
        include_runs,
        runs_limit,
        runs_offset,
        runs_sort,
      } = parsed.data;
    
      const requestContext = getRequestContext();
      const envConfig = requestContext ? null : getConfig();
      const resolvedProjectId =
        project_id ?? requestContext?.defaultProjectId ?? envConfig?.defaultProjectId;
    
      if (!resolvedProjectId) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                error: {
                  code: "MISSING_PROJECT_ID",
                  message:
                    "project_id is required. Either provide it in the request or set TC_DEFAULT_PROJECT environment variable.",
                },
              }),
            },
          ],
        };
      }
    
      try {
        const client = getApiClient();
    
        const resolvedTitleInput = normalizeString(title);
        const idAsNumber = toNumberId(id);
        const idAsTitle =
          typeof id === "string" && toNumberId(id) === undefined
            ? normalizeString(id)
            : undefined;
        const lookupTitle = resolvedTitleInput ?? idAsTitle;
        let resolvedTestPlanId = idAsNumber;
    
        if (resolvedTestPlanId === undefined && lookupTitle) {
          const exactMatchesRaw = await client.listTestPlans({
            projectId: resolvedProjectId,
            limit: 100,
            offset: 0,
            sort: "updated_at:desc",
            filter: { title: lookupTitle },
          });
          let matches = findMatchingPlansByTitle(
            mapPlanLookupCandidates(exactMatchesRaw),
            lookupTitle
          );
    
          if (matches.length === 0) {
            const fallbackMatchesRaw = await client.listTestPlans({
              projectId: resolvedProjectId,
              limit: 100,
              offset: 0,
              sort: "updated_at:desc",
              filter: { title_contains: lookupTitle },
            });
            matches = findMatchingPlansByTitle(
              mapPlanLookupCandidates(fallbackMatchesRaw),
              lookupTitle
            );
          }
    
          if (matches.length === 0) {
            return {
              content: [
                {
                  type: "text",
                  text: JSON.stringify({
                    error: {
                      code: "TEST_PLAN_NOT_FOUND",
                      message: `Test plan not found with title "${lookupTitle}" in that project.`,
                    },
                  }),
                },
              ],
            };
          }
    
          if (matches.length > 1) {
            return {
              content: [
                {
                  type: "text",
                  text: JSON.stringify({
                    error: {
                      code: "AMBIGUOUS_TEST_PLAN_TITLE",
                      message: `Multiple test plans matched title "${lookupTitle}". Provide ID instead.`,
                      details: {
                        matching_ids: matches.map((plan) => plan.id),
                      },
                    },
                  }),
                },
              ],
            };
          }
    
          resolvedTestPlanId = matches[0].id;
        }
    
        if (resolvedTestPlanId === undefined) {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify({
                  error: {
                    code: "VALIDATION_ERROR",
                    message: "Provide a numeric id or a non-empty title.",
                  },
                }),
              },
            ],
          };
        }
    
        const rawPlan = await client.getTestPlanRaw(resolvedTestPlanId);
        const plan = unwrapApiEntity(rawPlan);
        if (!plan) {
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify({
                  error: {
                    code: "INVALID_TEST_PLAN",
                    message: `Unable to parse test plan ${resolvedTestPlanId}.`,
                  },
                }),
              },
            ],
          };
        }
    
        const includedTestCasesCountResponse = await client.getTestPlanTestCaseCount(
          resolvedProjectId,
          resolvedTestPlanId
        );
        const includedTestCasesCount = extractCount(includedTestCasesCountResponse);
    
        let configurations: Array<Record<string, unknown>> = [];
        if (include_configurations) {
          const rawConfigurations = await client.listTestPlanConfigurations({
            projectId: resolvedProjectId,
            testplan: resolvedTestPlanId,
            limit: -1,
          });
    
          configurations = rawConfigurations
            .map((configuration) => mapConfiguration(configuration))
            .filter(
              (configuration): configuration is Record<string, unknown> =>
                Boolean(configuration)
            );
        }
    
        let runs: Array<Record<string, unknown>> = [];
        if (include_runs) {
          const rawRuns = await client.listTestPlanRegressions({
            projectId: resolvedProjectId,
            testplan: resolvedTestPlanId,
            limit: runs_limit,
            start: runs_offset,
            sort: runs_sort,
          });
    
          runs = rawRuns
            .map((run) => mapRun(run))
            .filter((run): run is Record<string, unknown> => Boolean(run));
        }
    
        let runCount: number | null = null;
        try {
          const runCountResponse = await client.getTestPlanRegressionCount(
            resolvedProjectId,
            { testplan: resolvedTestPlanId }
          );
          runCount = extractCount(runCountResponse);
        } catch {
          runCount = null;
        }
    
        const planResults = getField<unknown>(plan, "results");
        const planOverallSummary = normalizeResultSummary(
          planResults && typeof planResults === "object" && !Array.isArray(planResults)
            ? getField(planResults, "overall") ?? planResults
            : undefined
        );
        const latestRunSummary =
          runs.length > 0
            ? normalizeResultSummary(getField(runs[0], "result"))
            : null;
    
        let progress: ProgressPayload | null = null;
        if (planOverallSummary || latestRunSummary) {
          const summary = planOverallSummary ?? latestRunSummary!;
          const source: ProgressPayload["source"] = planOverallSummary
            ? "test_plan_results"
            : "latest_run_result";
          const total = Object.values(summary).reduce((sum, value) => sum + value, 0);
          const unexecuted = getStatusCount(summary, "unexecuted");
          const passed = getStatusCount(summary, "passed");
          const failed = getStatusCount(summary, "failed");
          const skipped = getStatusCount(summary, "skipped");
          const blocked = getStatusCount(summary, "blocked");
          const executed = Math.max(total - unexecuted, 0);
          progress = {
            source,
            status: deriveProgressStatus({
              total,
              executed,
              failed,
              blocked,
            }),
            total,
            executed,
            unexecuted,
            passed,
            failed,
            skipped,
            blocked,
            executionProgressPercent: toPercent(executed, total),
            passRatePercent: toPercent(passed, executed),
            summary,
          };
        }
    
        const status = toNumberId(getField(plan, "status"));
        const priority = toNumberId(getField(plan, "priority"));
        const testPlanFolderRaw =
          getField(plan, "test_plan_folder") ?? getField(plan, "testPlanFolder");
        const testPlanFolderId = extractId(testPlanFolderRaw);
        const testPlanFolderTitle =
          normalizeString(getField<string>(testPlanFolderRaw, "title")) ??
          normalizeString(getField<string>(testPlanFolderRaw, "name"));
        const releaseRaw = getField(plan, "release");
        const releaseId = extractId(releaseRaw);
        const releaseTitle =
          normalizeString(getField<string>(releaseRaw, "title")) ??
          normalizeString(getField<string>(releaseRaw, "name"));
    
        const createdBy = mapUser(
          getField(plan, "created_by") ?? getField(plan, "createdBy")
        );
        const assignedToRaw = getArrayField(plan, "assigned_to", ["assignedTo"]);
        const assignedTo = (assignedToRaw ?? [])
          .map((user) => mapUser(user))
          .filter((user): user is Record<string, unknown> => Boolean(user));
    
        const planConfigurationCountFromPlan = getArrayField(
          plan,
          "configurations"
        )?.length;
        const configurationCount = include_configurations
          ? configurations.length
          : planConfigurationCountFromPlan ?? null;
    
        const hasMoreRuns =
          include_runs &&
          (runCount !== null
            ? runs_offset + runs.length < runCount
            : runs.length === runs_limit);
    
        const normalizedPlan = {
          id: extractId(plan) ?? resolvedTestPlanId,
          ...(normalizeString(getField<string>(plan, "title"))
            ? { title: normalizeString(getField<string>(plan, "title")) }
            : {}),
          ...(normalizeString(getField<string>(plan, "description"))
            ? { description: normalizeString(getField<string>(plan, "description")) }
            : {}),
          ...(status !== undefined
            ? { status, statusLabel: testPlanStatusCodeToLabel[status] ?? "Unknown" }
            : {}),
          ...(priority !== undefined
            ? {
                priority,
                priorityLabel: testPlanPriorityCodeToLabel[priority] ?? "Unknown",
              }
            : {}),
          ...(typeof getField(plan, "archived") === "boolean"
            ? { archived: getField(plan, "archived") }
            : {}),
          ...(testPlanFolderId !== undefined
            ? {
                testPlanFolder: {
                  id: testPlanFolderId,
                  ...(testPlanFolderTitle ? { title: testPlanFolderTitle } : {}),
                },
              }
            : {}),
          ...(releaseId !== undefined
            ? {
                release: {
                  id: releaseId,
                  ...(releaseTitle ? { title: releaseTitle } : {}),
                },
              }
            : {}),
          ...(createdBy ? { createdBy } : {}),
          assignedTo,
          ...(normalizeString(getField<string>(plan, "start_date"))
            ? { startDate: normalizeString(getField<string>(plan, "start_date")) }
            : {}),
          ...(normalizeString(getField<string>(plan, "end_date"))
            ? { endDate: normalizeString(getField<string>(plan, "end_date")) }
            : {}),
          ...(normalizeString(getField<string>(plan, "actual_start_date"))
            ? {
                actualStartDate: normalizeString(
                  getField<string>(plan, "actual_start_date")
                ),
              }
            : {}),
          ...(normalizeString(getField<string>(plan, "created_at"))
            ? { createdAt: normalizeString(getField<string>(plan, "created_at")) }
            : {}),
          ...(normalizeString(getField<string>(plan, "updated_at"))
            ? { updatedAt: normalizeString(getField<string>(plan, "updated_at")) }
            : {}),
          ...(normalizeString(getField<string>(plan, "last_run"))
            ? { lastRun: normalizeString(getField<string>(plan, "last_run")) }
            : {}),
          ...(getField(plan, "results") &&
          typeof getField(plan, "results") === "object" &&
          !Array.isArray(getField(plan, "results"))
            ? { results: getField(plan, "results") }
            : {}),
          ...(typeof getField(plan, "time_spent") === "number"
            ? { timeSpent: getField(plan, "time_spent") }
            : {}),
          ...(typeof getField(plan, "estimate") === "number"
            ? { estimate: getField(plan, "estimate") }
            : {}),
        };
    
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(
                {
                  testPlan: normalizedPlan,
                  summary: {
                    included_test_cases_count: includedTestCasesCount,
                    configuration_count: configurationCount,
                    run_count: runCount,
                    current_progress_status: progress?.status ?? null,
                    execution_progress_percent:
                      progress?.executionProgressPercent ?? null,
                    pass_rate_percent: progress?.passRatePercent ?? null,
                  },
                  progress,
                  configurations,
                  runs,
                  runsPagination: include_runs
                    ? {
                        returned: runs.length,
                        limit: runs_limit,
                        offset: runs_offset,
                        hasMore: hasMoreRuns,
                      }
                    : null,
                },
                null,
                2
              ),
            },
          ],
        };
      } catch (error) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                error: {
                  code: "API_ERROR",
                  message: getErrorMessage(error),
                },
              }),
            },
          ],
        };
      }
    }
  • Schema definition for validating the input arguments for the get_test_plan tool.
    export const getTestPlanSchema = getTestPlanRegistrationSchema.refine(
      (value) => value.id !== undefined || value.title !== undefined,
      {
        message: "Either id or title is required.",
        path: ["id"],
      }
    );
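The id-or-title requirement enforced by the `refine` above can be expressed as a plain predicate. A minimal sketch; `requiresIdOrTitle` is a hypothetical stand-in for the refine callback, and the zod registration schema is not reproduced here:

```typescript
// Hypothetical stand-in for the refine predicate: a call is valid
// only when at least one of id or title is present, mirroring the
// "Either id or title is required." rule.
interface LookupArgs {
  id?: number | string;
  title?: string;
}

function requiresIdOrTitle(value: LookupArgs): boolean {
  return value.id !== undefined || value.title !== undefined;
}
```

An empty arguments object fails this predicate, which is what triggers the VALIDATION_ERROR branch at the top of the handler.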
  • Tool registration object containing metadata and input schema for the get_test_plan tool.
    export const getTestPlanTool = {
      name: "get_test_plan",
      description: `Fetch a single test plan with summary details:
    - Included test cases count
    - Test plan configurations
    - Test plan runs
    - Current execution progress status
    
    Required: id or title
    Optional: project_id, include_configurations, include_runs, runs_limit, runs_offset, runs_sort
    
    Example:
    {
      "id": 812,
      "project_id": 16
    }
    
    or
    
    {
      "title": "Release 3.0 Regression",
      "project_id": 16
    }`,
    
      inputSchema: {
        type: "object" as const,
        properties: {
          id: {
            oneOf: [{ type: "number" }, { type: "string" }],
            description: "Test plan ID to retrieve (numeric ID or title string)",
          },
          title: {
            type: "string",
            description: "Test plan title to retrieve (alternative to id)",
          },
          project_id: {
            type: "number",
            description: "Project ID (optional if default is set)",
          },
          include_configurations: {
            type: "boolean",
            default: true,
            description: "Include test plan configurations in the response",
          },
          include_runs: {
            type: "boolean",
            default: true,
            description: "Include test plan runs in the response",
          },
          runs_limit: {
            type: "number",
            minimum: 1,
            maximum: 100,
            default: 20,
            description: "Maximum number of runs to return (1-100, default: 20)",
          },
          runs_offset: {
            type: "number",
            minimum: 0,
            default: 0,
            description: "Number of runs to skip (default: 0)",
          },
          runs_sort: {
            type: "string",
            default: "id:desc",
            description: 'Run sort expression (default: "id:desc")',
          },
        },
        required: [],
      },
    };
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses what data is returned (test cases count, configurations, runs, status) and the pagination controls for runs, but fails to declare safety characteristics (read-only vs destructive) or error behaviors (e.g., what happens if neither id nor title is provided).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
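
One way to declare the missing safety characteristics is through MCP tool annotations. A hedged sketch of the hints this tool's registration object could carry, following the MCP specification's `ToolAnnotations` fields; whether this server's SDK version surfaces them is an assumption:

```typescript
// Sketch: annotation hints for a fetch-style tool, per the MCP
// ToolAnnotations interface. Values reflect get_test_plan's
// described behavior (no writes, repeatable, remote API).
const getTestPlanAnnotations = {
  readOnlyHint: true,      // the tool does not modify any state
  destructiveHint: false,  // no deletes or irreversible effects
  idempotentHint: true,    // repeated calls return the same data
  openWorldHint: true,     // it reaches an external TestCollab API
};
```

With hints like these attached, the description no longer has to carry the full disclosure burden on its own.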

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded purpose, bullet points for return values, clear required/optional grouping, and concrete examples. Listing optional parameters is slightly redundant given 100% schema coverage, but justified by the need to emphasize the id/title requirement logic.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description adequately covers return values (listing four key data elements returned). For an 8-parameter tool with pagination controls, the description provides sufficient context, though it could explicitly state it returns a single object versus a collection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage (baseline 3), the description adds crucial semantic context: the logical OR requirement between 'id' and 'title' that isn't captured in the schema's empty 'required' array. The examples showing valid request bodies provide additional usage context beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Fetch a single test plan with summary details' - specific verb (Fetch), resource (test plan), and scope (single/summary). It distinguishes from sibling tools like list_test_plans (plural) and create/update/delete_test_plan through the 'Fetch' verb and 'single' qualifier.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clarifies parameter requirements ('Required: id or title') and provides JSON examples, but lacks explicit guidance on when to use this versus list_test_plans (e.g., 'use this when you know the specific ID/title, otherwise search with list_test_plans'). No prerequisites or error scenarios are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
