# prem_chat_with_template
Generate chat responses using predefined Prem AI prompt templates: supply a template ID and parameter values, and the template keeps interactions with the model consistent and repeatable.
## Instructions
Chat using a predefined Prem AI prompt template
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| template_id | Yes | ID of the prompt template to use | |
| params | Yes | Parameters to fill in the template | |
| model | No | Optional model to use | |
| temperature | No | Optional temperature parameter | |
| max_tokens | No | Optional maximum tokens to generate | |
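
For example, a call to this tool might pass arguments shaped like the following. This is an illustrative sketch: the template ID, parameter names, and model name are hypothetical placeholders, not values from this repository.

```typescript
// Hypothetical arguments for prem_chat_with_template. "tmpl_abc123", the
// parameter names, and "gpt-4o" are placeholders; use the IDs and variables
// defined for your own Prem AI prompt templates.
const args = {
  template_id: "tmpl_abc123",
  params: { customer_name: "Ada", product: "widget" },
  model: "gpt-4o",      // optional
  temperature: 0.7,     // optional
  max_tokens: 256       // optional
};
```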
## Implementation Reference
- src/index.ts:186-234 (registration): registers the 'prem_chat_with_template' tool with the MCP server, wiring together the input schema and the inline handler function.

  ```typescript
  this.server.tool(
    "prem_chat_with_template",
    "Chat using a predefined Prem AI prompt template",
    {
      template_id: z.string().describe("ID of the prompt template to use"),
      params: z.record(z.string()).describe("Parameters to fill in the template"),
      model: z.string().optional().describe("Optional model to use"),
      temperature: z.number().optional().describe("Optional temperature parameter"),
      max_tokens: z.number().optional().describe("Optional maximum tokens to generate")
    },
    async ({ template_id, params, model, temperature, max_tokens }) => {
      const requestId = `template-${Date.now()}-${Math.random().toString(36).substring(2, 7)}`;
      this.activeRequests.add(requestId);
      try {
        const chatRequest = {
          project_id: PROJECT_ID as string,
          messages: [{ role: "user", template_id, params }],
          ...(model && { model }),
          ...(temperature && { temperature }),
          ...(max_tokens && { max_tokens })
        };
        const response = await this.client.chat.completions.create(chatRequest as any);
        const responseData = 'choices' in response ? response : { choices: [] };
        return {
          content: [{ type: "text" as const, text: JSON.stringify(responseData, null, 2) }]
        };
      } catch (error) {
        return {
          content: [{
            type: "text" as const,
            text: `Template chat error: ${error instanceof Error ? error.message : String(error)}`
          }],
          isError: true
        };
      } finally {
        this.activeRequests.delete(requestId);
      }
    }
  );
  ```
- src/index.ts:196-233 (handler): the handler builds a chat request from the provided template_id and params, calls Prem AI's chat.completions.create, and returns the JSON-serialized response (or an error message with isError set). The full handler appears inline in the registration excerpt above; a standalone sketch of the same request-building logic follows this list.
- src/index.ts:189-195 (schema): Zod schema defining the input parameters for the 'prem_chat_with_template' tool.

  ```typescript
  {
    template_id: z.string().describe("ID of the prompt template to use"),
    params: z.record(z.string()).describe("Parameters to fill in the template"),
    model: z.string().optional().describe("Optional model to use"),
    temperature: z.number().optional().describe("Optional temperature parameter"),
    max_tokens: z.number().optional().describe("Optional maximum tokens to generate")
  }
  ```
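
Taken together, the pieces above can be exercised outside the server. Below is a minimal standalone sketch, assuming only the zod package; the template ID, parameter values, and project ID are hypothetical placeholders, and the request is printed rather than sent.

```typescript
import { z } from "zod";

// Same input shape as the schema registered at src/index.ts:189-195.
const templateChatSchema = z.object({
  template_id: z.string(),
  params: z.record(z.string()),
  model: z.string().optional(),
  temperature: z.number().optional(),
  max_tokens: z.number().optional()
});

// Hypothetical raw arguments, e.g. as received from an MCP client.
const raw = {
  template_id: "tmpl_abc123",        // placeholder template ID
  params: { customer_name: "Ada" },
  temperature: 0.7
};

const args = templateChatSchema.parse(raw); // throws ZodError on invalid input

// Build the request the way the handler does: template_id and params ride
// inside the messages array, and optional fields are spread in only when
// truthy. "1234" stands in for the server's PROJECT_ID environment value.
const chatRequest = {
  project_id: "1234",
  messages: [{ role: "user", template_id: args.template_id, params: args.params }],
  ...(args.model && { model: args.model }),
  ...(args.temperature && { temperature: args.temperature }),
  ...(args.max_tokens && { max_tokens: args.max_tokens })
};

console.log(JSON.stringify(chatRequest, null, 2));
```

Note that the truthy spreads mirror the handler exactly, which means a falsy value such as temperature: 0 is silently omitted from the request.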