generateTextWithOpenAI
Generate text content using OpenAI models to create prompts, descriptions, and responses for 3D design projects within the Spline environment.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Prompt for text generation | |
| model | No | OpenAI model to use | gpt-3.5-turbo |
| maxTokens | No | Maximum number of tokens to generate | 256 |
| temperature | No | Temperature for text generation (0-2) | 0.7 |
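Only `prompt` is required; the server fills in the other parameters from their defaults. As a rough illustration of that behavior, here is a minimal sketch of the defaulting logic in plain JavaScript (the actual server applies defaults via Zod's `.default()`, not this hypothetical helper):

```javascript
// Hypothetical sketch of how omitted optional parameters fall back to
// their documented defaults. The real tool uses a Zod schema for this.
function applyDefaults({ prompt, model, maxTokens, temperature }) {
  if (typeof prompt !== 'string' || prompt.length < 1) {
    throw new Error('prompt is required');
  }
  return {
    prompt,
    model: model ?? 'gpt-3.5-turbo',
    maxTokens: maxTokens ?? 256,
    temperature: temperature ?? 0.7,
  };
}

const params = applyDefaults({ prompt: 'Describe a chrome sphere for a Spline scene' });
console.log(params.model);     // 'gpt-3.5-turbo'
console.log(params.maxTokens); // 256
```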
Implementation Reference
- src/tools/api-webhook-tools.js:344-371 (handler)

  The core handler function for the generateTextWithOpenAI MCP tool. It receives the input parameters, calls the openaiClient.generateText helper, and formats the response or error for the MCP protocol.

  ```javascript
  async ({ prompt, model, maxTokens, temperature }) => {
    try {
      const generatedText = await openaiClient.generateText(
        prompt, model, maxTokens, temperature
      );
      return { content: [{ type: 'text', text: generatedText }] };
    } catch (error) {
      return {
        content: [{ type: 'text', text: `Error generating text: ${error.message}` }],
        isError: true,
      };
    }
  }
  ```
- src/tools/api-webhook-tools.js (schema)

  Zod schema defining the input parameters for the generateTextWithOpenAI tool: prompt (required string), model (enum with default), maxTokens (number 1-4096, default 256), temperature (number 0-2, default 0.7).

  ```javascript
  {
    prompt: z.string().min(1).describe('Prompt for text generation'),
    model: z.enum(['gpt-3.5-turbo', 'gpt-4-turbo', 'gpt-4o-mini', 'gpt-4o'])
      .default('gpt-3.5-turbo').describe('OpenAI model to use'),
    maxTokens: z.number().min(1).max(4096).default(256)
      .describe('Maximum number of tokens to generate'),
    temperature: z.number().min(0).max(2).default(0.7)
      .describe('Temperature for text generation (0-2)'),
  }
  ```
- src/tools/api-webhook-tools.js:333-373 (registration)

  The server.tool registration call that defines and registers the generateTextWithOpenAI tool on the MCP server, including the name, schema, and handler function.

  ```javascript
  server.tool(
    'generateTextWithOpenAI',
    {
      prompt: z.string().min(1).describe('Prompt for text generation'),
      model: z.enum(['gpt-3.5-turbo', 'gpt-4-turbo', 'gpt-4o-mini', 'gpt-4o'])
        .default('gpt-3.5-turbo').describe('OpenAI model to use'),
      maxTokens: z.number().min(1).max(4096).default(256)
        .describe('Maximum number of tokens to generate'),
      temperature: z.number().min(0).max(2).default(0.7)
        .describe('Temperature for text generation (0-2)'),
    },
    async ({ prompt, model, maxTokens, temperature }) => {
      try {
        const generatedText = await openaiClient.generateText(
          prompt, model, maxTokens, temperature
        );
        return { content: [{ type: 'text', text: generatedText }] };
      } catch (error) {
        return {
          content: [{ type: 'text', text: `Error generating text: ${error.message}` }],
          isError: true,
        };
      }
    }
  );
  ```
- src/utils/openai-client.js:31-52 (helper)

  Supporting utility method in the OpenAIClient class that performs the actual HTTP POST request to OpenAI's chat/completions API, extracts the generated text, and handles errors.

  ```javascript
  async generateText(prompt, model = 'gpt-3.5-turbo', maxTokens = 100, temperature = 0.7) {
    try {
      const response = await this.client.post('/chat/completions', {
        model,
        messages: [
          { role: 'system', content: 'You are a helpful assistant.' },
          { role: 'user', content: prompt },
        ],
        max_tokens: maxTokens,
        temperature,
      });
      return response.data.choices[0].message.content.trim();
    } catch (error) {
      console.error(`OpenAI API error: ${error.message}`);
      if (error.response) {
        console.error(`Status: ${error.response.status}`);
        console.error(`Data: ${JSON.stringify(error.response.data)}`);
      }
      throw error;
    }
  }
  ```
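The helper's `response.data.choices[0].message.content.trim()` expression reads the first choice from the chat/completions response body. A small sketch of that extraction against a hand-written sample payload (not real API output):

```javascript
// Hand-written sample mimicking the shape of the response object the
// helper receives from its HTTP client (body under `data`).
const sampleResponse = {
  data: {
    choices: [
      { message: { role: 'assistant', content: '  A glossy chrome sphere.  ' } },
    ],
  },
};

// Same extraction the helper performs: first choice, message content, trimmed.
const text = sampleResponse.data.choices[0].message.content.trim();
console.log(text); // 'A glossy chrome sphere.'
```

Note that if the API returned no choices, this expression would throw a TypeError, which the helper's catch block would log and rethrow.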