respondText

Generate AI-driven text responses to prompts using customizable settings like model, temperature, and system instructions for tailored output.

Instructions

Respond with text to a prompt using the Pollinations Text API. User-configured settings in MCP config will be used as defaults unless specifically overridden.

Input Schema

| Name        | Required | Description                                                                      | Default                      |
| ----------- | -------- | -------------------------------------------------------------------------------- | ---------------------------- |
| model       | No       | Model to use for text generation. Use listTextModels to see all available models | user config or "openai"      |
| prompt      | Yes      | The text prompt to generate a response for                                       | —                            |
| seed        | No       | Seed for reproducible results                                                    | random                       |
| system      | No       | System prompt to guide the model's behavior                                      | user config or none          |
| temperature | No       | Controls randomness in the output (0.0 to 2.0)                                   | user config or model default |
| top_p       | No       | Controls diversity via nucleus sampling (0.0 to 1.0)                             | user config or model default |
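
For example, an MCP client could override only the temperature while letting every other setting fall back to the user-configured defaults. The request below is an illustrative sketch using the standard MCP tools/call method; the prompt and value are examples, not defaults:

{
  "method": "tools/call",
  "params": {
    "name": "respondText",
    "arguments": {
      "prompt": "Explain recursion in one short paragraph",
      "temperature": 0.2
    }
  }
}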

Input Schema (JSON Schema)

{ "properties": { "model": { "description": "Model to use for text generation (default: user config or \"openai\"). Use listTextModels to see all available models", "type": "string" }, "prompt": { "description": "The text prompt to generate a response for", "type": "string" }, "seed": { "description": "Seed for reproducible results (default: random)", "type": "number" }, "system": { "description": "System prompt to guide the model's behavior (default: user config or none)", "type": "string" }, "temperature": { "description": "Controls randomness in the output (0.0 to 2.0, default: user config or model default)", "type": "number" }, "top_p": { "description": "Controls diversity via nucleus sampling (0.0 to 1.0, default: user config or model default)", "type": "number" } }, "required": [ "prompt" ], "type": "object" }

Implementation Reference

  • Core handler function that implements the respondText tool logic by calling the Pollinations Text API with the provided parameters (a direct-call usage sketch follows this list).
    export async function respondText(prompt, model = "openai", seed = Math.floor(Math.random() * 1000000), temperature = null, top_p = null, system = null, authConfig = null) {
      if (!prompt || typeof prompt !== 'string') {
        throw new Error('Prompt is required and must be a string');
      }

      // Build the query parameters
      const queryParams = new URLSearchParams();
      if (model) queryParams.append('model', model);
      if (seed !== undefined) queryParams.append('seed', seed);
      if (temperature !== null) queryParams.append('temperature', temperature);
      if (top_p !== null) queryParams.append('top_p', top_p);
      if (system) queryParams.append('system', system);

      // Always set private to true
      queryParams.append('private', 'true');

      // Construct the URL
      const encodedPrompt = encodeURIComponent(prompt);
      const baseUrl = 'https://text.pollinations.ai';
      let url = `${baseUrl}/${encodedPrompt}`;

      // Add query parameters if they exist
      const queryString = queryParams.toString();
      if (queryString) {
        url += `?${queryString}`;
      }

      try {
        // Prepare fetch options with optional auth headers
        const fetchOptions = {};
        if (authConfig) {
          fetchOptions.headers = {};
          if (authConfig.token) {
            fetchOptions.headers['Authorization'] = `Bearer ${authConfig.token}`;
          }
          if (authConfig.referrer) {
            fetchOptions.headers['Referer'] = authConfig.referrer;
          }
        }

        // Fetch the text from the URL
        const response = await fetch(url, fetchOptions);
        if (!response.ok) {
          throw new Error(`Failed to generate text: ${response.statusText}`);
        }

        // Get the text response
        const textResponse = await response.text();
        return textResponse;
      } catch (error) {
        log('Error generating text:', error);
        throw error;
      }
    }
  • Schema definition for the respondText tool, including input parameters and descriptions.
    export const respondTextSchema = {
      name: 'respondText',
      description: 'Respond with text to a prompt using the Pollinations Text API. User-configured settings in MCP config will be used as defaults unless specifically overridden.',
      inputSchema: {
        type: 'object',
        properties: {
          prompt: {
            type: 'string',
            description: 'The text prompt to generate a response for'
          },
          model: {
            type: 'string',
            description: 'Model to use for text generation (default: user config or "openai"). Use listTextModels to see all available models'
          },
          seed: {
            type: 'number',
            description: 'Seed for reproducible results (default: random)'
          },
          temperature: {
            type: 'number',
            description: 'Controls randomness in the output (0.0 to 2.0, default: user config or model default)'
          },
          top_p: {
            type: 'number',
            description: 'Controls diversity via nucleus sampling (0.0 to 1.0, default: user config or model default)'
          },
          system: {
            type: 'string',
            description: 'System prompt to guide the model\'s behavior (default: user config or none)'
          }
        },
        required: ['prompt']
      }
    };
  • MCP server request handler block that processes respondText tool calls, applies defaults, invokes the core respondText function, and formats the response (see the configuration sketch after this list).
    } else if (name === 'respondText') {
      try {
        const {
          prompt,
          model = defaultConfig.text.model,
          seed,
          temperature = defaultConfig.text.temperature ? Number(defaultConfig.text.temperature) : undefined,
          top_p = defaultConfig.text.top_p ? Number(defaultConfig.text.top_p) : undefined,
          system = defaultConfig.text.system
        } = args;
        const result = await respondText(prompt, model, seed, temperature, top_p, system, finalAuthConfig);
        return {
          content: [
            {
              type: 'text',
              text: result
            }
          ]
        };
      } catch (error) {
        return {
          content: [
            {
              type: 'text',
              text: `Error generating text response: ${error.message}`
            }
          ],
          isError: true
        };
      }
  • Registration of the list tools handler which includes the respondText schema via getAllToolSchemas().
    server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: getAllToolSchemas() }));
  • Import of the respondText function into the MCP server for use in tool handling.
    import {
      generateImageUrl,
      generateImage,
      editImage,
      generateImageFromReference,
      respondAudio,
      listImageModels,
      listTextModels,
      listAudioVoices,
      respondText,
    } from './src/index.js';
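
For orientation, the core respondText function shown above can also be called directly, outside the MCP request flow. The snippet below is a minimal sketch; the prompt, settings, and auth token are illustrative examples, not defaults:

// Minimal usage sketch; values are examples only
const reply = await respondText(
  'Write a haiku about the sea',              // prompt
  'openai',                                   // model
  42,                                         // seed, for reproducible output
  0.7,                                        // temperature
  null,                                       // top_p (fall back to model default)
  'You are a concise poet.',                  // system prompt
  { token: process.env.POLLINATIONS_TOKEN }   // optional auth config (hypothetical env var)
);
console.log(reply);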
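
The request handler reads its defaults from a defaultConfig.text object supplied by the user's MCP configuration. A hypothetical shape for that object, matching the fields referenced in the handler excerpt, could look like this (values are examples only):

// Hypothetical user-configured defaults; temperature and top_p may arrive as
// strings from the MCP config, which is why the handler wraps them in Number(...)
const defaultConfig = {
  text: {
    model: 'openai',
    temperature: '0.7',
    top_p: '0.9',
    system: 'You are a helpful assistant.'
  }
};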

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pinkpixel-dev/MCPollinations'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.