
CircleCI MCP Server

by ampcome-mcps

create_prompt_template

Generate structured prompt templates from requirements or existing prompts for testing AI applications in CircleCI workflows.

Instructions

ABOUT THIS TOOL:

  • This tool is part of a toolchain that generates and provides test cases for a prompt template.

  • This tool helps an AI assistant to generate a prompt template based on one of the following:

    1. feature requirements defined by a user - in which case the tool will generate a new prompt template based on the feature requirements.

    2. a pre-existing prompt or prompt template that a user wants to test, evaluate, or modify - in which case the tool will convert it into a more structured and testable prompt template while leaving the original prompt language relatively unchanged.

  • This tool will return a structured prompt template (e.g. template) along with a context schema (e.g. contextSchema) that defines the expected input parameters for the prompt template.

  • In some cases, a user will want to add test coverage for ALL of the prompts in a given application. In these cases, the AI agent should use this tool to generate a prompt template for each prompt in the application, and should check the entire application for AI prompts that are not already covered by a prompt template in the ./prompts directory.

WHEN SHOULD THIS TOOL BE TRIGGERED?

  • This tool should be triggered whenever the user provides requirements for a new AI-enabled application or a new AI-enabled feature of an existing application (i.e. one that requires a prompt request to an LLM or any AI model).

  • This tool should also be triggered if the user provides a pre-existing prompt or prompt template from their codebase that they want to test, evaluate, or modify.

  • This tool should be triggered even if there are pre-existing files in the ./prompts directory with the <relevant-name>.prompt.yml convention (e.g. bedtime-story-generator.prompt.yml, plant-care-assistant.prompt.yml, customer-support-chatbot.prompt.yml, etc.). Similar files should NEVER be generated directly by the AI agent. Instead, the AI agent should use this tool to first generate a new prompt template.

PARAMETERS:

  • params: object

    • prompt: string (the feature requirements or pre-existing prompt/prompt template that will be used to generate a prompt template. Can be a multi-line string.)

    • promptOrigin: "codebase" | "requirements" (indicates whether the prompt comes from an existing codebase or from new requirements)

    • model: string (the model that the prompt template will be tested against. Explicitly specify the model if it can be inferred from the codebase. Otherwise, defaults to gpt-4.1-mini.)

    • temperature: number (the temperature of the prompt template. Explicitly specify the temperature if it can be inferred from the codebase. Otherwise, defaults to 1.)

EXAMPLE USAGE (from new requirements): { "params": { "prompt": "Create an app that takes any topic and an age (in years), then renders a 1-minute bedtime story for a person of that age.", "promptOrigin": "requirements", "model": "gpt-4.1-mini", "temperature": 1.0 } }

EXAMPLE USAGE (from pre-existing prompt/prompt template in codebase): { "params": { "prompt": "The user wants a bedtime story about {{topic}} for a person of age {{age}} years old. Please craft a captivating tale that captivates their imagination and provides a delightful bedtime experience.", "promptOrigin": "codebase", "model": "claude-3-5-sonnet-latest", "temperature": 0.7 } }

TOOL OUTPUT INSTRUCTIONS:

  • The tool will return...

    • a template that reformulates the user's prompt into a more structured format.

    • a contextSchema that defines the expected input parameters for the template.

    • a promptOrigin that indicates whether the prompt comes from an existing prompt or prompt template in the user's codebase or from new requirements.

  • The tool output -- the template, contextSchema, and promptOrigin -- will also be used as input to the recommend_prompt_template_tests tool to generate a list of recommended tests that can be used to test the prompt template.
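The handoff described above can be sketched in plain TypeScript. This is a hedged illustration only: the `buildNextToolCall` helper and the output/call shapes are hypothetical stand-ins for the client side, not part of the MCP server itself.

```typescript
// Hypothetical sketch of how the fields returned by create_prompt_template
// map onto the input of recommend_prompt_template_tests. The type and helper
// names are illustrative assumptions, not the server's actual API.
type CreatePromptTemplateOutput = {
  promptOrigin: "codebase" | "requirements";
  template: string;
  contextSchema: Record<string, string>;
  model: string;
  temperature: number;
};

function buildNextToolCall(out: CreatePromptTemplateOutput) {
  // Every output field is passed through unchanged to the follow-up tool.
  return {
    name: "recommend_prompt_template_tests",
    params: {
      template: out.template,
      contextSchema: out.contextSchema,
      promptOrigin: out.promptOrigin,
      model: out.model,
      temperature: out.temperature,
    },
  };
}

const call = buildNextToolCall({
  promptOrigin: "requirements",
  template: "Write a bedtime story about {{topic}} for a person aged {{age}}.",
  contextSchema: { topic: "string", age: "number" },
  model: "gpt-4.1-mini",
  temperature: 1.0,
});
console.log(call.name); // "recommend_prompt_template_tests"
```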

Input Schema

Name   | Required | Description | Default
params | No       |             |

Implementation Reference

  • Core handler function that invokes CircletClient to create a structured prompt template from input prompt and origin, then returns formatted text output with extracted keys (promptOrigin, template, contextSchema, model) and instructions for next tool call.
    export const createPromptTemplate: ToolCallback<{
      params: typeof createPromptTemplateInputSchema;
    }> = async (args) => {
      const { prompt, promptOrigin, model } = args.params;
    
      const circlet = new CircletClient();
      const promptObject = await circlet.circlet.createPromptTemplate(
        prompt,
        promptOrigin,
      );
    
      return {
        content: [
          {
            type: 'text',
            text: `${promptOriginKey}: ${promptOrigin}
    
    ${promptTemplateKey}: ${promptObject.template}
    
    ${contextSchemaKey}: ${JSON.stringify(promptObject.contextSchema, null, 2)}
    
    ${modelKey}: ${model}
    
    NEXT STEP:
    - Immediately call the \`${PromptWorkbenchToolName.recommend_prompt_template_tests}\` tool with:
      - template: the \`${promptTemplateKey}\` above
      - ${contextSchemaKey}: the \`${contextSchemaKey}\` above
      - ${promptOriginKey}: the \`${promptOriginKey}\` above
      - ${modelKey}: the \`${modelKey}\` above
      - ${temperatureKey}: the \`${temperatureKey}\` above
    `,
          },
        ],
      };
    };
  • Zod input schema defining parameters: prompt (string), promptOrigin (enum), model (string, default), temperature (number, default).
    export const createPromptTemplateInputSchema = z.object({
      prompt: z
        .string()
        .describe(
          "The user's application, feature, or product requirements that will be used to generate a prompt template. Alternatively, a pre-existing prompt or prompt template can be provided if a user wants to test, evaluate, or modify it. (Can be a multi-line string.)",
        ),
      promptOrigin: z
        .nativeEnum(PromptOrigin)
        .describe(
          `The origin of the prompt - either "${PromptOrigin.codebase}" for existing prompts from the codebase, or "${PromptOrigin.requirements}" for new prompts from requirements.`,
        ),
      model: z
        .string()
        .default(defaultModel)
        .describe(
          `The model that the prompt template will be tested against. Explicitly specify the model if it can be inferred from the codebase. Otherwise, defaults to \`${defaultModel}\`.`,
        ),
      temperature: z
        .number()
        .default(defaultTemperature)
        .describe(
          `The temperature of the prompt template. Explicitly specify the temperature if it can be inferred from the codebase. Otherwise, defaults to ${defaultTemperature}.`,
        ),
    });
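As an illustration of the defaulting behavior the Zod schema above encodes, here is a dependency-free sketch. The `withDefaults` helper is hypothetical; the default values `gpt-4.1-mini` and `1` are assumptions taken from the prose descriptions.

```typescript
// Illustrative sketch of z.string().default(...) / z.number().default(...)
// behavior in plain TypeScript: absent optional fields are filled in,
// provided fields pass through untouched. Not the actual Zod parse path.
const defaultModel = "gpt-4.1-mini";
const defaultTemperature = 1;

type PromptTemplateInput = {
  prompt: string;
  promptOrigin: "codebase" | "requirements";
  model?: string;
  temperature?: number;
};

function withDefaults(input: PromptTemplateInput) {
  return {
    ...input,
    model: input.model ?? defaultModel,
    temperature: input.temperature ?? defaultTemperature,
  };
}

const parsed = withDefaults({
  prompt: "Summarize a support ticket in two sentences.",
  promptOrigin: "requirements",
});
console.log(parsed.model, parsed.temperature); // "gpt-4.1-mini" 1
```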
  • Tool object registration defining name 'create_prompt_template', detailed description, and references inputSchema.
    export const createPromptTemplateTool = {
      name: PromptWorkbenchToolName.create_prompt_template,
      description: `
      ABOUT THIS TOOL:
      - This tool is part of a toolchain that generates and provides test cases for a prompt template.
      - This tool helps an AI assistant to generate a prompt template based on one of the following:
        1. feature requirements defined by a user - in which case the tool will generate a new prompt template based on the feature requirements.
        2. a pre-existing prompt or prompt template that a user wants to test, evaluate, or modify - in which case the tool will convert it into a more structured and testable prompt template while leaving the original prompt language relatively unchanged.
      - This tool will return a structured prompt template (e.g. \`${templateKey}\`) along with a context schema (e.g. \`${contextSchemaKey}\`) that defines the expected input parameters for the prompt template.
      - In some cases, a user will want to add test coverage for ALL of the prompts in a given application. In these cases, the AI agent should use this tool to generate a prompt template for each prompt in the application, and should check the entire application for AI prompts that are not already covered by a prompt template in the \`${promptsOutputDirectory}\` directory.
    
      WHEN SHOULD THIS TOOL BE TRIGGERED?
  - This tool should be triggered whenever the user provides requirements for a new AI-enabled application or a new AI-enabled feature of an existing application (i.e. one that requires a prompt request to an LLM or any AI model).
      - This tool should also be triggered if the user provides a pre-existing prompt or prompt template from their codebase that they want to test, evaluate, or modify.
      - This tool should be triggered even if there are pre-existing files in the \`${promptsOutputDirectory}\` directory with the \`${fileNameTemplate}\` convention (e.g. \`${fileNameExample1}\`, \`${fileNameExample2}\`, \`${fileNameExample3}\`, etc.). Similar files should NEVER be generated directly by the AI agent. Instead, the AI agent should use this tool to first generate a new prompt template.
    
      PARAMETERS:
      - ${paramsKey}: object
        - ${promptKey}: string (the feature requirements or pre-existing prompt/prompt template that will be used to generate a prompt template. Can be a multi-line string.)
        - ${promptOriginKey}: "${PromptOrigin.codebase}" | "${PromptOrigin.requirements}" (indicates whether the prompt comes from an existing codebase or from new requirements)
        - ${modelKey}: string (the model that the prompt template will be tested against. Explicitly specify the model if it can be inferred from the codebase. Otherwise, defaults to \`${defaultModel}\`.)
        - ${temperatureKey}: number (the temperature of the prompt template. Explicitly specify the temperature if it can be inferred from the codebase. Otherwise, defaults to ${defaultTemperature}.)
    
      EXAMPLE USAGE (from new requirements):
      {
        "${paramsKey}": {
          "${promptKey}": "Create an app that takes any topic and an age (in years), then renders a 1-minute bedtime story for a person of that age.",
          "${promptOriginKey}": "${PromptOrigin.requirements}"
          "${modelKey}": "${defaultModel}"
          "${temperatureKey}": 1.0
        }
      }
    
      EXAMPLE USAGE (from pre-existing prompt/prompt template in codebase):
      {
        "${paramsKey}": {
          "${promptKey}": "The user wants a bedtime story about {{topic}} for a person of age {{age}} years old. Please craft a captivating tale that captivates their imagination and provides a delightful bedtime experience.",
          "${promptOriginKey}": "${PromptOrigin.codebase}"
          "${modelKey}": "claude-3-5-sonnet-latest"
          "${temperatureKey}": 0.7
        }
      }
    
      TOOL OUTPUT INSTRUCTIONS:
      - The tool will return...
        - a \`${templateKey}\` that reformulates the user's prompt into a more structured format.
        - a \`${contextSchemaKey}\` that defines the expected input parameters for the template.
        - a \`${promptOriginKey}\` that indicates whether the prompt comes from an existing prompt or prompt template in the user's codebase or from new requirements.
      - The tool output -- the \`${templateKey}\`, \`${contextSchemaKey}\`, and \`${promptOriginKey}\` -- will also be used as input to the \`${PromptWorkbenchToolName.recommend_prompt_template_tests}\` tool to generate a list of recommended tests that can be used to test the prompt template.
      `,
      inputSchema: createPromptTemplateInputSchema,
    };
  • Registration of createPromptTemplateTool in the main CCI_TOOLS array for MCP tools.
    export const CCI_TOOLS = [
      getBuildFailureLogsTool,
      getFlakyTestLogsTool,
      getLatestPipelineStatusTool,
      getJobTestResultsTool,
      configHelperTool,
      createPromptTemplateTool,
      recommendPromptTemplateTestsTool,
      runPipelineTool,
      listFollowedProjectsTool,
      runEvaluationTestsTool,
      rerunWorkflowTool,
      analyzeDiffTool,
      runRollbackPipelineTool,
    ];
  • Mapping of tool name 'create_prompt_template' to its handler function in CCI_HANDLERS.
    export const CCI_HANDLERS = {
      get_build_failure_logs: getBuildFailureLogs,
      find_flaky_tests: getFlakyTestLogs,
      get_latest_pipeline_status: getLatestPipelineStatus,
      get_job_test_results: getJobTestResults,
      config_helper: configHelper,
      create_prompt_template: createPromptTemplate,
      recommend_prompt_template_tests: recommendPromptTemplateTests,
      run_pipeline: runPipeline,
      list_followed_projects: listFollowedProjects,
      run_evaluation_tests: runEvaluationTests,
      rerun_workflow: rerunWorkflow,
      analyze_diff: analyzeDiff,
      run_rollback_pipeline: runRollbackPipeline,
    } satisfies ToolHandlers;
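A name-to-handler map like `CCI_HANDLERS` is typically consumed by a small dispatcher. The sketch below is a self-contained illustration with stub handlers; the real handlers are async and call CircleCI APIs, so treat the shapes here as assumptions.

```typescript
// Hedged sketch of dispatching an incoming MCP tool call through a
// name-to-handler map. Handler bodies are stubs for illustration only.
type Handler = (args: { params: Record<string, unknown> }) => string;

const handlers: Record<string, Handler> = {
  create_prompt_template: ({ params }) => `template for: ${params.prompt}`,
  recommend_prompt_template_tests: () => "recommended tests",
};

function dispatch(name: string, args: { params: Record<string, unknown> }): string {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

const result = dispatch("create_prompt_template", {
  params: { prompt: "bedtime story" },
});
console.log(result); // "template for: bedtime story"
```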
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (generates structured prompt templates and context schemas), what it returns, and how the output connects to other tools (specifically 'recommend_prompt_template_tests'). It explains the tool's role in a broader workflow and provides concrete examples. The only minor gap is lack of explicit mention about error handling or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (ABOUT THIS TOOL, WHEN SHOULD THIS TOOL BE TRIGGERED, PARAMETERS, etc.), but it's quite lengthy with some redundancy. While most content is valuable, some information (like the detailed examples) could potentially be streamlined. It's front-loaded with purpose information, but the overall length reduces conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (generating structured templates from varied inputs) and the absence of both annotations and output schema, the description does an excellent job of explaining what the tool does, when to use it, what parameters mean, and what outputs to expect. It even connects to sibling tools. The main gap is the lack of explicit output schema documentation, though the description does outline what the tool returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must fully compensate. It provides detailed parameter information in the 'PARAMETERS' section, explaining each parameter's purpose, possible values, and defaults. The examples further clarify usage. This adds substantial value beyond the bare schema, though it doesn't explicitly map parameters to the two usage scenarios mentioned earlier.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'generate a prompt template based on feature requirements or pre-existing prompts.' It specifies the verb ('generate'), resource ('prompt template'), and distinguishes between two distinct input scenarios. This is specific and comprehensive, going beyond a simple restatement of the tool name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit, detailed guidance on when to trigger the tool: for new AI application/feature requirements or for testing/evaluating/modifying pre-existing prompts. It also includes important exclusions, stating that similar files should 'NEVER be generated directly by the AI agent' and must use this tool instead. This offers clear alternatives and boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
