test_ai_action

Test AI prompts in isolation to evaluate responses before integrating into workflows. Submit prompt templates with variables and expected output structures for validation.

Instructions

Test an AI prompt in isolation without creating a workflow or execution. Pass a prompt template with {{variable}} syntax and variable values to run the AI and see the response. Useful for tuning prompts and response structures before adding an AI step to a workflow. Example: test_ai_action("Analyze this company: {{company}}", { company: "Stripe" }, { score: "number 0-100", summary: "string" })
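The `{{variable}}` substitution described above can be sketched in a few lines. This is an illustrative stand-in, not the server's actual implementation: placeholders with matching variable values are replaced, and unmatched placeholders are assumed to be left intact.

```typescript
// Illustrative sketch of {{variable}} template substitution
// (assumed behavior; the real server-side logic is not shown here).
function renderTemplate(
  template: string,
  variables: Record<string, unknown> = {},
): string {
  // Replace each {{name}} with its value; leave unknown names untouched.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in variables ? String(variables[name]) : match,
  );
}

const prompt = renderTemplate("Analyze this company: {{company}}", {
  company: "Stripe",
});
console.log(prompt); // "Analyze this company: Stripe"
```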

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| template | Yes | Prompt template with `{{variable}}` placeholders | |
| variables | No | Variable values to substitute in the template | |
| responseStructure | No | Expected JSON output shape (e.g., `{ score: "number 0-100", summary: "string" }`) | |
| responseType | No | Response format: `"json"` (default) or `"text"` | `"json"` |
| systemPrompt | No | Optional system instructions for the AI | |
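The `responseStructure` parameter is a loose shape hint rather than a formal schema. A minimal check that a model reply at least carries the hinted keys might look like the following sketch (illustrative only; the tool's actual validation is not shown in this excerpt):

```typescript
// Hypothetical key-presence check against a responseStructure hint such as
// { score: "number 0-100", summary: "string" }. Illustrative only.
function hasExpectedKeys(
  reply: Record<string, unknown>,
  structure: Record<string, unknown>,
): boolean {
  return Object.keys(structure).every((key) => key in reply);
}

const structure = { score: "number 0-100", summary: "string" };
console.log(hasExpectedKeys({ score: 87, summary: "Solid fundamentals." }, structure)); // true
console.log(hasExpectedKeys({ score: 87 }, structure)); // false (missing "summary")
```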

Implementation Reference

  • The MCP tool 'test_ai_action' is registered here. It takes a template, variables, and response structure as input and delegates the execution to the 'testAiAction' method of the 'client' object.
        server.tool(
            'test_ai_action',
            `Test an AI prompt in isolation without creating a workflow or execution.
    Pass a prompt template with {{variable}} syntax and variable values to run the AI and see the response.
    Useful for tuning prompts and response structures before adding an AI step to a workflow.
    Example: test_ai_action("Analyze this company: {{company}}", { company: "Stripe" }, { score: "number 0-100", summary: "string" })`,
            {
                template: z.string().describe('Prompt template with {{variable}} placeholders'),
                variables: z.record(z.string(), z.any()).optional().describe('Variable values to substitute in the template'),
                responseStructure: z.record(z.string(), z.any()).optional().describe('Expected JSON output shape (e.g., { score: "number 0-100", summary: "string" })'),
                responseType: z.enum(['json', 'text']).optional().describe('Response format: "json" (default) or "text"'),
                systemPrompt: z.string().optional().describe('Optional system instructions for the AI'),
            },
            async ({ template, variables, responseStructure, responseType, systemPrompt }, extra) => {
                const client = clientFactory(extra);
                const result = await client.testAiAction(template, variables, responseStructure, {
                    responseType,
                    systemPrompt,
                });
                return {
                    content: [{
                        type: 'text' as const,
                        text: JSON.stringify(result, null, 2),
                    }],
                };
            }
        );
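The handler's return value follows the MCP text-content shape: the API result is serialized and wrapped in a single text item. That wrapping step can be isolated into a small helper (the name `toMcpContent` is illustrative, not part of the source):

```typescript
// Wrap an arbitrary result in the MCP text-content shape used by the
// handler above. Helper name is illustrative.
function toMcpContent(result: unknown) {
  return {
    content: [
      {
        type: "text" as const,
        text: JSON.stringify(result, null, 2),
      },
    ],
  };
}

const wrapped = toMcpContent({ score: 87, summary: "Solid fundamentals." });
console.log(wrapped.content[0].type); // "text"
```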
  • The 'testAiAction' method on the AgentledClient class performs the actual API call to the '/step/test-ai' endpoint.
    async testAiAction(
        template: string,
        variables?: Record<string, any>,
        responseStructure?: Record<string, any>,
        options?: { responseType?: string; systemPrompt?: string }
    ) {
        return this.request('/step/test-ai', {
            method: 'POST',
            body: JSON.stringify({
                template,
                variables,
                responseStructure,
                responseType: options?.responseType,
                systemPrompt: options?.systemPrompt,
            }),
        });
    }
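The mapping from arguments to the POST payload is the core of `testAiAction`, so it helps to see it standalone. The sketch below mirrors the body construction in the method above with the transport stripped out (the function name is illustrative; note that `JSON.stringify` drops keys whose value is `undefined`, so omitted options never appear in the request body):

```typescript
// Build the payload that testAiAction posts to /step/test-ai,
// mirroring the body construction shown above. Name is illustrative.
function testAiActionBody(
  template: string,
  variables?: Record<string, unknown>,
  responseStructure?: Record<string, unknown>,
  options?: { responseType?: string; systemPrompt?: string },
) {
  return {
    template,
    variables,
    responseStructure,
    responseType: options?.responseType,
    systemPrompt: options?.systemPrompt,
  };
}

const body = testAiActionBody(
  "Analyze this company: {{company}}",
  { company: "Stripe" },
  { score: "number 0-100", summary: "string" },
  { responseType: "json" },
);
console.log(JSON.stringify(body)); // systemPrompt is dropped by JSON.stringify
```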
