CircleCI MCP Server

by ampcome-mcps

recommend_prompt_template_tests

Generate test cases for prompt templates to validate functionality and ensure proper integration with AI models.

Instructions

About this tool:

  • This tool is part of a toolchain that generates and provides test cases for a prompt template.

  • This tool generates an array of recommended tests for a given prompt template.

Parameters:

  • params: object

    • promptTemplate: string (the prompt template to be tested)

    • contextSchema: object (the context schema that defines the expected input parameters for the prompt template)

    • promptOrigin: "codebase" | "requirements" (indicates whether the prompt comes from an existing codebase or from new requirements)

    • model: string (the model that the prompt template will be tested against)

Example usage:

    {
      "params": {
        "promptTemplate": "The user wants a bedtime story about {{topic}} for a person of age {{age}} years old. Please craft a captivating tale that captivates their imagination and provides a delightful bedtime experience.",
        "contextSchema": {
          "topic": "string",
          "age": "number"
        },
        "promptOrigin": "codebase"
      }
    }
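The {{placeholder}} syntax in the example can be filled in with a simple interpolation helper. A minimal sketch (renderTemplate is an illustrative helper, not part of the CircleCI MCP server):

```typescript
// Minimal sketch of {{placeholder}} interpolation for templates like the
// example above. renderTemplate is an illustrative helper, not part of
// the CircleCI MCP server.
type TemplateContext = Record<string, string | number>;

function renderTemplate(template: string, context: TemplateContext): string {
  // Replace each {{key}} token with its context value; leave unknown
  // tokens untouched so missing inputs are easy to spot.
  return template.replace(/\{\{(\w+)\}\}/g, (token, key: string) =>
    key in context ? String(context[key]) : token,
  );
}

const story = renderTemplate(
  "A bedtime story about {{topic}} for a {{age}}-year-old.",
  { topic: "dragons", age: 7 },
);
console.log(story); // → "A bedtime story about dragons for a 7-year-old."
```

Leaving unknown tokens intact (rather than substituting an empty string) makes it obvious when a sample input is missing a key declared in the contextSchema.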

The tool will return a structured array of test cases that can be used to test the prompt template.

Tool output instructions:

  • The tool will return a recommendedTests array that can be used to test the prompt template.
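A caller receiving the tool's text output could recover that array as sketched below. This assumes the output begins with a "recommendedTests:" label followed by JSON; the exact key name comes from a server-side constant and may differ, and real output may append follow-up instructions after the JSON:

```typescript
// Sketch of extracting the recommended tests from the tool's text
// output. Assumes a "recommendedTests: <json>" prefix; the real key
// name is a server-side constant and may differ, and real output may
// carry follow-up instructions after the JSON.
function extractRecommendedTests(text: string): string[] {
  const marker = "recommendedTests:";
  const start = text.indexOf(marker);
  if (start === -1) {
    throw new Error("recommendedTests marker not found in tool output");
  }
  const json = text.slice(start + marker.length).trim();
  return JSON.parse(json) as string[];
}

const sample = 'recommendedTests: ["Handles an empty topic", "Handles age 0"]';
console.log(extractRecommendedTests(sample));
```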

Input Schema

  Name      Required   Description   Default
  params    No         —             —

Implementation Reference

  • The main handler function that executes the tool logic. It invokes the Circlet client to generate recommended tests and provides instructions for saving the prompt template, schema, and tests to files.
    export const recommendPromptTemplateTests: ToolCallback<{
      params: typeof recommendPromptTemplateTestsInputSchema;
    }> = async (args) => {
      const { template, contextSchema, promptOrigin } = args.params;
    
      const circlet = new CircletClient();
      const result = await circlet.circlet.recommendPromptTemplateTests({
        template,
        contextSchema,
      });
    
      const baseInstructions = `${recommendedTestsKey}: ${JSON.stringify(result, null, 2)}
    
    NEXT STEP:
    - Immediately save the \`${promptTemplateKey}\`, \`${contextSchemaKey}\`, and \`${recommendedTestsKey}\` to a single file containing the prompt template, context schema, and tests in a simple structured format (e.g. YAML, JSON, or whatever is most appropriate for the language of the current repository).
      - The ${fileExtension} file should be named in the format '${fileNameTemplate}' (e.g. '${fileNameExample1}', '${fileNameExample2}', '${fileNameExample3}', etc.)
      - The file should have the following keys:
        - \`name\`: string (the name of the prompt template)
        - \`description\`: string (a description of the prompt template)
        - \`version\`: string (the semantic version of the prompt template, e.g. "1.0.0")
        - \`${promptOriginKey}\`: string (the origin of the prompt template, e.g. "${PromptOrigin.codebase}" or "${PromptOrigin.requirements}")
        - \`${modelKey}\`: string (the model used for generating the prompt template and tests)
        - \`${temperatureKey}\`: number (the temperature used for generating the prompt template and tests)
        - \`template\`: multi-line string (the prompt template)
        - \`${contextSchemaKey}\`: object (the \`${contextSchemaKey}\`)
        - \`tests\`: array of objects (based on the \`${recommendedTestsKey}\`)
          - \`name\`: string (a relevant "Title Case" name for the test, based on the content of the \`${recommendedTestsKey}\` array item)
          - \`description\`: string (taken directly from string array item in \`${recommendedTestsKey}\`)
        - \`sampleInputs\`: object[] (the sample inputs for the \`${promptTemplateKey}\` and any tests within \`${recommendedTestsKey}\`)
    
    RULES FOR SAVING FILES:
    - The files should be saved in the \`${promptsOutputDirectory}\` directory at the root of the project.
    - Files should be written with respect to the prevailing conventions of the current repository.
    - The prompt files should be documented with a README description of what they do, and how they work.
      - If a README already exists in the \`${promptsOutputDirectory}\` directory, update it with the new prompt template information.
      - If a README does not exist in the \`${promptsOutputDirectory}\` directory, create one.
    - The files should be formatted using the user's preferred conventions.
    - Only save the following files (and nothing else):
      - \`${fileNameTemplate}\`
      - \`README.md\``;
    
      const integrationInstructions =
        promptOrigin === PromptOrigin.codebase
          ? `
    
    FINALLY, ONCE ALL THE FILES ARE SAVED:
    1. Ask user if they want to integrate the new templates into their app as a more tested and trustworthy replacement for their pre-existing prompt implementations. (Yes/No)
    2. If yes, import the \`${promptsOutputDirectory}\` files into their app, following codebase conventions
    3. Only use existing dependencies - no new imports
    4. Ensure integration is error-free and builds successfully`
          : '';
    
      return {
        content: [
          {
            type: 'text',
            text: baseInstructions + integrationInstructions,
          },
        ],
      };
    };
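Following the key list in the handler's saving instructions, a saved prompt file might look like the sketch below. The spellings of the templated keys (promptOrigin, model, temperature, contextSchema), the file name, and all values are assumptions for illustration:

```yaml
# Illustrative prompt file following the keys listed in the handler's
# instructions. Key names for the templated ${...} keys and all values
# are assumed for the example.
name: bedtime-story
description: Generates a bedtime story for a given topic and age.
version: "1.0.0"
promptOrigin: codebase
model: example-model   # placeholder, not the server's real default
temperature: 1.0
template: |
  The user wants a bedtime story about {{topic}} for a person of age
  {{age}} years old. Please craft a captivating tale that captivates
  their imagination and provides a delightful bedtime experience.
contextSchema:
  topic: string
  age: number
tests:
  - name: Handles A Young Age
    description: Verifies the story is age-appropriate for a young child.
sampleInputs:
  - topic: dragons
    age: 7
```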
  • Input schema using Zod for validating the tool's parameters: template, contextSchema, promptOrigin, model, temperature.
    export const recommendPromptTemplateTestsInputSchema = z.object({
      template: z
        .string()
        .describe(
          `The prompt template to be tested. Use the \`promptTemplate\` from the latest \`${PromptWorkbenchToolName.create_prompt_template}\` tool output (if available).`,
        ),
      contextSchema: z
        .record(z.string(), z.string())
        .describe(
          `The context schema that defines the expected input parameters for the prompt template. Use the \`contextSchema\` from the latest \`${PromptWorkbenchToolName.create_prompt_template}\` tool output.`,
        ),
      promptOrigin: z
        .nativeEnum(PromptOrigin)
        .describe(
          `The origin of the prompt template, indicating where it came from (e.g. "${PromptOrigin.codebase}" or "${PromptOrigin.requirements}").`,
        ),
      model: z
        .string()
        .default(defaultModel)
        .describe(
          `The model to use for generating actual prompt outputs for testing. Defaults to ${defaultModel}.`,
        ),
      temperature: z
        .number()
        .default(defaultTemperature)
        .describe(
          `The temperature of the prompt template. Explicitly specify the temperature if it can be inferred from the codebase. Otherwise, defaults to ${defaultTemperature}.`,
        ),
    });
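The model and temperature fields carry defaults, so callers may omit them. A plain-TypeScript sketch of that default-filling behavior (the default values shown are placeholders, not the server's real defaults):

```typescript
// Plain-TypeScript sketch of the schema's default-filling behavior.
// defaultModel and defaultTemperature are placeholders, not the
// server's real defaults.
const defaultModel = "example-model"; // placeholder
const defaultTemperature = 1.0; // placeholder

interface RecommendTestsInput {
  template: string;
  contextSchema: Record<string, string>;
  promptOrigin: "codebase" | "requirements";
  model?: string;
  temperature?: number;
}

function withDefaults(input: RecommendTestsInput): Required<RecommendTestsInput> {
  // Fill in model and temperature only when the caller omitted them.
  return {
    ...input,
    model: input.model ?? defaultModel,
    temperature: input.temperature ?? defaultTemperature,
  };
}

const parsed = withDefaults({
  template: "Tell a story about {{topic}}.",
  contextSchema: { topic: "string" },
  promptOrigin: "codebase",
});
console.log(parsed.model, parsed.temperature);
```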
  • Tool object registration defining name, description, and inputSchema for the recommend_prompt_template_tests tool.
    export const recommendPromptTemplateTestsTool = {
      name: PromptWorkbenchToolName.recommend_prompt_template_tests,
      description: `
      About this tool:
      - This tool is part of a toolchain that generates and provides test cases for a prompt template.
      - This tool generates an array of recommended tests for a given prompt template.
    
      Parameters:
      - ${paramsKey}: object
        - ${promptTemplateKey}: string (the prompt template to be tested)
        - ${contextSchemaKey}: object (the context schema that defines the expected input parameters for the prompt template)
        - ${promptOriginKey}: "${PromptOrigin.codebase}" | "${PromptOrigin.requirements}" (indicates whether the prompt comes from an existing codebase or from new requirements)
        - ${modelKey}: string (the model that the prompt template will be tested against)
        
      Example usage:
      {
        "${paramsKey}": {
          "${promptTemplateKey}": "The user wants a bedtime story about {{topic}} for a person of age {{age}} years old. Please craft a captivating tale that captivates their imagination and provides a delightful bedtime experience.",
          "${contextSchemaKey}": {
            "topic": "string",
            "age": "number"
          },
          "${promptOriginKey}": "${PromptOrigin.codebase}"
        }
      }
    
      The tool will return a structured array of test cases that can be used to test the prompt template.
    
      Tool output instructions:
        - The tool will return a ${recommendedTestsVar} array that can be used to test the prompt template.
      `,
      inputSchema: recommendPromptTemplateTestsInputSchema,
    };
  • Central registration mapping the tool name 'recommend_prompt_template_tests' to its handler function in the CCI_HANDLERS object.
    export const CCI_HANDLERS = {
      get_build_failure_logs: getBuildFailureLogs,
      find_flaky_tests: getFlakyTestLogs,
      get_latest_pipeline_status: getLatestPipelineStatus,
      get_job_test_results: getJobTestResults,
      config_helper: configHelper,
      create_prompt_template: createPromptTemplate,
      recommend_prompt_template_tests: recommendPromptTemplateTests,
      run_pipeline: runPipeline,
      list_followed_projects: listFollowedProjects,
      run_evaluation_tests: runEvaluationTests,
      rerun_workflow: rerunWorkflow,
      analyze_diff: analyzeDiff,
      run_rollback_pipeline: runRollbackPipeline,
    } satisfies ToolHandlers;
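The handler map enables simple name-based dispatch. A self-contained sketch of the pattern (the registered handler body is a stand-in, not the real implementation):

```typescript
// Sketch of name-based tool dispatch over a handler map, mirroring the
// CCI_HANDLERS pattern. The registered handler is a stand-in.
type ToolResult = { content: { type: "text"; text: string }[] };
type Handler = (args: { params: Record<string, unknown> }) => Promise<ToolResult>;

const handlers: Record<string, Handler> = {
  recommend_prompt_template_tests: async () => ({
    content: [{ type: "text", text: "recommendedTests: []" }],
  }),
};

async function dispatch(
  name: string,
  args: { params: Record<string, unknown> },
): Promise<ToolResult> {
  const handler = handlers[name];
  if (!handler) {
    throw new Error(`Unknown tool: ${name}`);
  }
  return handler(args);
}

dispatch("recommend_prompt_template_tests", { params: {} }).then((result) =>
  console.log(result.content[0].text),
);
```

Looking the handler up before calling it lets the dispatcher reject unknown tool names with a clear error instead of failing deep inside a call.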
  • Central registration including the recommendPromptTemplateTestsTool in the CCI_TOOLS array.
    export const CCI_TOOLS = [
      getBuildFailureLogsTool,
      getFlakyTestLogsTool,
      getLatestPipelineStatusTool,
      getJobTestResultsTool,
      configHelperTool,
      createPromptTemplateTool,
      recommendPromptTemplateTestsTool,
      runPipelineTool,
      listFollowedProjectsTool,
      runEvaluationTestsTool,
      rerunWorkflowTool,
      analyzeDiffTool,
      runRollbackPipelineTool,
    ];
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool 'generates an array of recommended tests' and specifies the output as a 'structured array of test cases,' which adds some behavioral context. However, it doesn't cover important aspects like whether this is a read-only operation, potential side effects, error handling, or performance considerations. The description provides basic behavior but lacks depth for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections like 'About this tool,' 'Parameters,' 'Example usage,' and 'Tool output instructions,' making it easy to scan. It's appropriately sized with no redundant information, though the example usage could be more concise. Every sentence adds value, and it's front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (1 parameter with nested objects, no output schema, and no annotations), the description is moderately complete. It explains the purpose, parameters, and output format, but lacks details on behavioral traits, error cases, or integration with sibling tools. Without annotations or output schema, the description should do more to cover all aspects, but it provides a baseline understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description lists and briefly explains all parameters (promptTemplate, contextSchema, promptOrigin, model) in a 'Parameters' section, adding meaning beyond the input schema. Since schema description coverage is 0% (based on context signals), the description compensates well by documenting the parameters. However, it doesn't fully explain the semantics of 'contextSchema' or 'promptOrigin' in detail, and there's a discrepancy: the description lists 'params' as an object with specific properties, but the schema shows 'params' with different property names (e.g., 'template' vs. 'promptTemplate').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'generates an array of recommended tests for a given prompt template.' It specifies the verb ('generates') and resource ('recommended tests'), though it doesn't explicitly differentiate from sibling tools like 'run_evaluation_tests' or 'find_flaky_tests' which might have overlapping testing functions. The description is specific but lacks sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some context by mentioning it's 'part of a toolchain that generates and provides test cases for a prompt template,' implying usage in a testing workflow. However, it doesn't explicitly state when to use this tool versus alternatives like 'run_evaluation_tests' or 'create_prompt_template,' nor does it specify prerequisites or exclusions. The guidance is implied but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/ampcome-mcps/circleci-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.