generateTextWithOpenAI

Generate text content using OpenAI models to create prompts, descriptions, and responses for 3D design projects within the Spline environment.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| prompt | Yes | Prompt for text generation | |
| model | No | OpenAI model to use | gpt-3.5-turbo |
| maxTokens | No | Maximum number of tokens to generate | 256 |
| temperature | No | Temperature for text generation (0-2) | 0.7 |
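
For reference, a tools/call request supplying these parameters might look like the following sketch (the argument values are illustrative; omitted optional fields fall back to the defaults above):

    {
      "method": "tools/call",
      "params": {
        "name": "generateTextWithOpenAI",
        "arguments": {
          "prompt": "Write a short description for a low-poly spaceship scene.",
          "model": "gpt-4o-mini",
          "maxTokens": 256,
          "temperature": 0.7
        }
      }
    }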

Implementation Reference

  • The core handler function for the generateTextWithOpenAI MCP tool. It receives the input parameters, calls the openaiClient.generateText helper, and formats the response or error for the MCP protocol.

    async ({ prompt, model, maxTokens, temperature }) => {
      try {
        const generatedText = await openaiClient.generateText(
          prompt,
          model,
          maxTokens,
          temperature
        );
        return {
          content: [{ type: 'text', text: generatedText }]
        };
      } catch (error) {
        return {
          content: [{ type: 'text', text: `Error generating text: ${error.message}` }],
          isError: true
        };
      }
    }
  • Zod schema defining the input parameters for the generateTextWithOpenAI tool: prompt (required string), model (enum with a default), maxTokens (number, 1-4096, default 256), and temperature (number, 0-2, default 0.7). A short demonstration of how these defaults are applied follows this list.

    {
      prompt: z.string().min(1).describe('Prompt for text generation'),
      model: z.enum(['gpt-3.5-turbo', 'gpt-4-turbo', 'gpt-4o-mini', 'gpt-4o'])
        .default('gpt-3.5-turbo').describe('OpenAI model to use'),
      maxTokens: z.number().min(1).max(4096).default(256)
        .describe('Maximum number of tokens to generate'),
      temperature: z.number().min(0).max(2).default(0.7)
        .describe('Temperature for text generation (0-2)'),
    }
  • The server.tool registration call that defines and registers the generateTextWithOpenAI tool on the MCP server, combining the name, input schema, and handler shown above. A sketch of a client-side invocation follows this list.

    server.tool(
      'generateTextWithOpenAI',
      {
        prompt: z.string().min(1).describe('Prompt for text generation'),
        model: z.enum(['gpt-3.5-turbo', 'gpt-4-turbo', 'gpt-4o-mini', 'gpt-4o'])
          .default('gpt-3.5-turbo').describe('OpenAI model to use'),
        maxTokens: z.number().min(1).max(4096).default(256)
          .describe('Maximum number of tokens to generate'),
        temperature: z.number().min(0).max(2).default(0.7)
          .describe('Temperature for text generation (0-2)'),
      },
      async ({ prompt, model, maxTokens, temperature }) => {
        try {
          const generatedText = await openaiClient.generateText(
            prompt,
            model,
            maxTokens,
            temperature
          );
          return {
            content: [{ type: 'text', text: generatedText }]
          };
        } catch (error) {
          return {
            content: [{ type: 'text', text: `Error generating text: ${error.message}` }],
            isError: true
          };
        }
      }
    );
  • Supporting utility method on the OpenAIClient class that performs the actual HTTP POST request to OpenAI's chat/completions API, extracts the generated text, and logs and rethrows errors. Note that its own fallback default of 100 tokens differs from the schema default of 256; when the tool is invoked through MCP, the schema default is what gets passed in. A sketch of how this.client might be constructed follows this list.

    async generateText(prompt, model = 'gpt-3.5-turbo', maxTokens = 100, temperature = 0.7) {
      try {
        const response = await this.client.post('/chat/completions', {
          model,
          messages: [
            { role: 'system', content: 'You are a helpful assistant.' },
            { role: 'user', content: prompt }
          ],
          max_tokens: maxTokens,
          temperature,
        });
        return response.data.choices[0].message.content.trim();
      } catch (error) {
        console.error(`OpenAI API error: ${error.message}`);
        if (error.response) {
          console.error(`Status: ${error.response.status}`);
          console.error(`Data: ${JSON.stringify(error.response.data)}`);
        }
        throw error;
      }
    }
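
As a quick demonstration of how the schema defaults behave, the following minimal sketch wraps the field definitions above in z.object and parses an input that supplies only prompt (the wrapper and example values are illustrative, not part of the server code):

    import { z } from 'zod';

    // Same field definitions as the tool schema above, wrapped for standalone use.
    const inputSchema = z.object({
      prompt: z.string().min(1),
      model: z.enum(['gpt-3.5-turbo', 'gpt-4-turbo', 'gpt-4o-mini', 'gpt-4o'])
        .default('gpt-3.5-turbo'),
      maxTokens: z.number().min(1).max(4096).default(256),
      temperature: z.number().min(0).max(2).default(0.7),
    });

    // Only `prompt` is supplied; Zod fills in the documented defaults.
    const parsed = inputSchema.parse({ prompt: 'Describe a floating island scene.' });
    console.log(parsed);
    // => { prompt: '...', model: 'gpt-3.5-turbo', maxTokens: 256, temperature: 0.7 }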
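
From the client side, invoking the registered tool might look like the sketch below. It assumes the @modelcontextprotocol/sdk client over a stdio transport and a hypothetical server.js entry point; transport details vary by deployment:

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Spawn the MCP server over stdio (the entry point here is hypothetical).
    const transport = new StdioClientTransport({ command: 'node', args: ['server.js'] });
    const client = new Client({ name: 'example-client', version: '1.0.0' });
    await client.connect(transport);

    // Call the tool; omitted fields fall back to the schema defaults.
    const result = await client.callTool({
      name: 'generateTextWithOpenAI',
      arguments: {
        prompt: 'Write a one-line caption for a neon cityscape scene.',
        temperature: 0.9,
      },
    });
    console.log(result.content[0].text);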
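
The OpenAIClient constructor is not part of this reference. A plausible sketch, assuming this.client is an axios instance pointed at the OpenAI REST API (the class shape beyond this.client is an assumption):

    import axios from 'axios';

    class OpenAIClient {
      constructor(apiKey) {
        // Hypothetical setup: the reference only shows calls to `this.client.post(...)`.
        this.client = axios.create({
          baseURL: 'https://api.openai.com/v1',
          headers: {
            Authorization: `Bearer ${apiKey}`,
            'Content-Type': 'application/json',
          },
        });
      }

      // generateText(...) as shown above.
    }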
