configureOpenAI

Configure OpenAI integration in Spline 3D scenes to generate AI responses, map variables, and automate interactions when scenes load.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| sceneId | Yes | Scene ID | |
| model | No | OpenAI model to use | `gpt-3.5-turbo` |
| apiKey | No | OpenAI API key (uses env var if not provided) | |
| prompt | Yes | System prompt/behavior for the AI | |
| requestOnStart | No | Whether to call OpenAI when scene loads | `false` |
| variableMappings | No | Mappings from OpenAI response to Spline variables | |
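An invocation might pass arguments like the following. This is an illustrative sketch: the scene ID, prompt, and the `responseField`/`variableName` values are hypothetical, and the exact response-field path syntax is an assumption, not documented behavior.

```javascript
// Hypothetical arguments for a configureOpenAI call.
// sceneId and variableMappings values are illustrative, not real identifiers.
const args = {
  sceneId: 'scene_abc123', // hypothetical scene ID
  model: 'gpt-4o-mini',
  prompt: 'Answer as a friendly museum guide for this 3D exhibit.',
  requestOnStart: true, // call OpenAI as soon as the scene loads
  variableMappings: [
    // Copy a field from the API response into a Spline variable
    // (field path syntax assumed here).
    { responseField: 'reply', variableName: 'aiReply' },
  ],
};
```

Omitted optional fields (`apiKey` here) fall back to the defaults in the schema above, e.g. the `OPENAI_API_KEY` environment variable.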

Implementation Reference

  • Direct registration of the `configureOpenAI` MCP tool, including the inline Zod input schema and the async handler that configures OpenAI integration for a Spline scene via an API call:

    ```javascript
    server.tool(
      'configureOpenAI',
      {
        sceneId: z.string().min(1).describe('Scene ID'),
        model: z
          .enum(['gpt-3.5-turbo', 'gpt-4-turbo', 'gpt-4o-mini', 'gpt-4o'])
          .default('gpt-3.5-turbo')
          .describe('OpenAI model to use'),
        apiKey: z.string().optional().describe('OpenAI API key (uses env var if not provided)'),
        prompt: z.string().min(1).describe('System prompt/behavior for the AI'),
        requestOnStart: z
          .boolean()
          .optional()
          .default(false)
          .describe('Whether to call OpenAI when scene loads'),
        variableMappings: z
          .array(
            z.object({
              responseField: z.string().describe('Field from API response'),
              variableName: z.string().describe('Spline variable name'),
            })
          )
          .optional()
          .describe('Mappings from OpenAI response to Spline variables'),
      },
      async ({ sceneId, model, apiKey, prompt, requestOnStart, variableMappings }) => {
        try {
          const openaiConfig = {
            model,
            apiKey: apiKey || process.env.OPENAI_API_KEY,
            prompt,
            requestOnStart: requestOnStart || false,
            ...(variableMappings && { variableMappings }),
          };
          const result = await apiClient.request('POST', `/scenes/${sceneId}/openai`, openaiConfig);
          return {
            content: [
              {
                type: 'text',
                text: `OpenAI integration configured successfully with ID: ${result.id}`,
              },
            ],
          };
        } catch (error) {
          return {
            content: [{ type: 'text', text: `Error configuring OpenAI: ${error.message}` }],
            isError: true,
          };
        }
      }
    );
    ```
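The way the handler assembles the request body can be illustrated standalone. This is a minimal dependency-free sketch of the same logic, with `buildOpenAIConfig` as a hypothetical helper name; the API call via `apiClient` is omitted, and schema defaults (such as the model) are applied by Zod before the handler runs, not here.

```javascript
// Minimal sketch of the handler's config construction (buildOpenAIConfig is
// a hypothetical name for illustration; it is not part of the actual tool).
function buildOpenAIConfig({ model, apiKey, prompt, requestOnStart, variableMappings }) {
  return {
    model,
    apiKey: apiKey || process.env.OPENAI_API_KEY, // fall back to the env var
    prompt,
    requestOnStart: requestOnStart || false, // default to false when omitted
    ...(variableMappings && { variableMappings }), // include only when provided
  };
}

const config = buildOpenAIConfig({
  model: 'gpt-3.5-turbo',
  prompt: 'You are a helpful scene narrator.',
});
console.log('variableMappings' in config); // false: omitted when not provided
```

The conditional spread keeps `variableMappings` out of the request body entirely when the caller does not supply it, rather than sending an explicit `undefined`.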

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/aydinfer/spline-mcp-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.