prem_chat_with_template

Generate chat responses from predefined Prem AI prompt templates: supply a template ID plus the parameter values that fill its placeholders, and the tool forwards them to Prem AI's chat completions API.

Instructions

Chat using a predefined Prem AI prompt template

Input Schema

| Name        | Required | Description                         | Default |
| ----------- | -------- | ----------------------------------- | ------- |
| template_id | Yes      | ID of the prompt template to use    |         |
| params      | Yes      | Parameters to fill in the template  |         |
| model       | No       | Optional model to use               |         |
| temperature | No       | Optional temperature parameter      |         |
| max_tokens  | No       | Optional maximum tokens to generate |         |
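
For reference, here is a minimal sketch of invoking this tool from an MCP client. It uses the official @modelcontextprotocol/sdk client; the server launch path, template ID, and parameter names below are hypothetical placeholders.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the Prem MCP server over stdio (the build path is a placeholder).
const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"],
});
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Invoke the tool; `params` keys must match the template's placeholders.
const result = await client.callTool({
  name: "prem_chat_with_template",
  arguments: {
    template_id: "your-template-id",       // hypothetical template ID
    params: { topic: "vector databases" }, // hypothetical placeholder values
    temperature: 0.7,                      // optional
  },
});
```

On success the tool returns the chat completion as pretty-printed JSON in a single text content block; on failure it returns a text block describing the error with isError set to true.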

Implementation Reference

  • src/index.ts:186-234 (registration)
    Registration of the 'prem_chat_with_template' tool, including the Zod input schema and the inline handler function:

```typescript
this.server.tool(
  "prem_chat_with_template",
  "Chat using a predefined Prem AI prompt template",
  {
    template_id: z.string().describe("ID of the prompt template to use"),
    params: z.record(z.string()).describe("Parameters to fill in the template"),
    model: z.string().optional().describe("Optional model to use"),
    temperature: z.number().optional().describe("Optional temperature parameter"),
    max_tokens: z.number().optional().describe("Optional maximum tokens to generate")
  },
  async ({ template_id, params, model, temperature, max_tokens }) => {
    const requestId = `template-${Date.now()}-${Math.random().toString(36).substring(2, 7)}`;
    this.activeRequests.add(requestId);
    try {
      const chatRequest = {
        project_id: PROJECT_ID as string,
        messages: [{ role: "user", template_id, params }],
        ...(model && { model }),
        ...(temperature && { temperature }),
        ...(max_tokens && { max_tokens })
      };
      const response = await this.client.chat.completions.create(chatRequest as any);
      const responseData = 'choices' in response ? response : { choices: [] };
      return {
        content: [{ type: "text" as const, text: JSON.stringify(responseData, null, 2) }]
      };
    } catch (error) {
      return {
        content: [{
          type: "text" as const,
          text: `Template chat error: ${error instanceof Error ? error.message : String(error)}`
        }],
        isError: true
      };
    } finally {
      this.activeRequests.delete(requestId);
    }
  }
);
```
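
One caveat worth noting in the handler: the conditional spreads (`...(temperature && { temperature })`) drop any falsy value, so an explicit `temperature: 0` or `max_tokens: 0` would silently be omitted from the request. If zero values need to reach the API, a stricter check is the usual alternative. A sketch, not the repository's code, reusing the handler's variables:

```typescript
// Spread each optional field only when it was actually provided,
// so 0 (a falsy but valid value) still reaches the API.
const chatRequest = {
  project_id: PROJECT_ID as string,
  messages: [{ role: "user", template_id, params }],
  ...(model !== undefined && { model }),
  ...(temperature !== undefined && { temperature }),
  ...(max_tokens !== undefined && { max_tokens }),
};
```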
