# respondText
Generate text responses from prompts using configurable models and parameters. User settings supply the defaults, which can be overridden per call.
## Instructions
Respond with text to a prompt using the Pollinations Text API. User-configured settings in the MCP config are used as defaults unless explicitly overridden.
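For instance, a client might pass arguments like the following (an illustrative sketch; only `prompt` is required, and any omitted field falls back to the MCP-config default or the model default):

```javascript
// Illustrative arguments for a respondText call.
// Only `prompt` is required; the other values are examples.
const args = {
  prompt: 'Summarize the water cycle in two sentences',
  model: 'openai',   // optional; use listTextModels for the full list
  temperature: 0.7,  // optional; 0.0 to 2.0
  seed: 12345        // optional; fixes the output for reproducibility
};
```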
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The text prompt to generate a response for | |
| model | No | Model to use for text generation (default: user config or "openai"). Use listTextModels to see all available models | |
| seed | No | Seed for reproducible results (default: random) | |
| temperature | No | Controls randomness in the output (0.0 to 2.0, default: user config or model default) | |
| top_p | No | Controls diversity via nucleus sampling (0.0 to 1.0, default: user config or model default) | |
| system | No | System prompt to guide the model's behavior (default: user config or none) | |
## Implementation Reference
- **src/services/textService.js:23-78** (handler) — The actual implementation of the `respondText` tool. Makes an HTTP request to the Pollinations Text API with the prompt, model, seed, temperature, top_p, system, and authConfig parameters. Always adds `private=true`. Returns the generated text response.
```javascript
export async function respondText(
  prompt,
  model = "openai",
  seed = Math.floor(Math.random() * 1000000),
  temperature = null,
  top_p = null,
  system = null,
  authConfig = null
) {
  if (!prompt || typeof prompt !== 'string') {
    throw new Error('Prompt is required and must be a string');
  }

  // Build the query parameters
  const queryParams = new URLSearchParams();
  if (model) queryParams.append('model', model);
  if (seed !== undefined) queryParams.append('seed', seed);
  if (temperature !== null) queryParams.append('temperature', temperature);
  if (top_p !== null) queryParams.append('top_p', top_p);
  if (system) queryParams.append('system', system);

  // Always set private to true
  queryParams.append('private', 'true');

  // Construct the URL
  const encodedPrompt = encodeURIComponent(prompt);
  const baseUrl = 'https://text.pollinations.ai';
  let url = `${baseUrl}/${encodedPrompt}`;

  // Add query parameters if they exist
  const queryString = queryParams.toString();
  if (queryString) {
    url += `?${queryString}`;
  }

  try {
    // Prepare fetch options with optional auth headers
    const fetchOptions = {};
    if (authConfig) {
      fetchOptions.headers = {};
      if (authConfig.token) {
        fetchOptions.headers['Authorization'] = `Bearer ${authConfig.token}`;
      }
      if (authConfig.referrer) {
        fetchOptions.headers['Referer'] = authConfig.referrer;
      }
    }

    // Fetch the text from the URL
    const response = await fetch(url, fetchOptions);
    if (!response.ok) {
      throw new Error(`Failed to generate text: ${response.statusText}`);
    }

    // Get the text response
    const textResponse = await response.text();
    return textResponse;
  } catch (error) {
    log('Error generating text:', error);
    throw error;
  }
}
```

- **pollinations-mcp-server.js:345-361** (registration) — MCP server handler that extracts arguments (with defaults from config or environment variables), calls `respondText()`, and returns the result as text content.
```javascript
} else if (name === 'respondText') {
  try {
    const {
      prompt,
      model = defaultConfig.text.model,
      seed,
      temperature = defaultConfig.text.temperature
        ? Number(defaultConfig.text.temperature)
        : undefined,
      top_p = defaultConfig.text.top_p
        ? Number(defaultConfig.text.top_p)
        : undefined,
      system = defaultConfig.text.system
    } = args;

    const result = await respondText(prompt, model, seed, temperature, top_p, system, finalAuthConfig);

    return {
      content: [
        {
          type: 'text',
          text: result
        }
      ]
    };
  } catch (error) {
    return {
      content: [
        {
          type: 'text',
          text: `Error generating text response: ${error.message}`
        }
      ],
      isError: true
    };
  }
```

- **src/services/textSchema.js:8-41** (schema) — JSON Schema definition for the `respondText` tool, defining the input properties: prompt (required string), model, seed, temperature, top_p, and system.
```javascript
export const respondTextSchema = {
  name: 'respondText',
  description: 'Respond with text to a prompt using the Pollinations Text API. User-configured settings in MCP config will be used as defaults unless specifically overridden.',
  inputSchema: {
    type: 'object',
    properties: {
      prompt: {
        type: 'string',
        description: 'The text prompt to generate a response for'
      },
      model: {
        type: 'string',
        description: 'Model to use for text generation (default: user config or "openai"). Use listTextModels to see all available models'
      },
      seed: {
        type: 'number',
        description: 'Seed for reproducible results (default: random)'
      },
      temperature: {
        type: 'number',
        description: 'Controls randomness in the output (0.0 to 2.0, default: user config or model default)'
      },
      top_p: {
        type: 'number',
        description: 'Controls diversity via nucleus sampling (0.0 to 1.0, default: user config or model default)'
      },
      system: {
        type: 'string',
        description: 'System prompt to guide the model\'s behavior (default: user config or none)'
      }
    },
    required: ['prompt']
  }
};
```

- **src/schemas.js:32-44** (registration) — Central registration of `respondTextSchema` in the `getAllToolSchemas()` array, enabling the MCP server to advertise the tool.
```javascript
export function getAllToolSchemas() {
  return [
    generateImageUrlSchema,
    generateImageSchema,
    editImageSchema,
    generateImageFromReferenceSchema,
    listImageModelsSchema,
    respondAudioSchema,
    listAudioVoicesSchema,
    respondTextSchema,
    listTextModelsSchema
  ];
}
```

- **src/index.js:26-29** (helper) — Re-exports `respondText` from textService.js as part of the public API.
```javascript
  // Text services
  respondText,
  listTextModels,
};
```
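A note on defaults: values from the MCP config arrive as strings, which is why the registration handler coerces the numeric parameters with `Number()`. A standalone sketch of that resolution, with `defaultConfig` as a stand-in for the server's parsed config (the real handler uses destructuring defaults, which behave the same way for absent keys):

```javascript
// Stand-in for the server's parsed MCP config; values are strings.
const defaultConfig = { text: { model: 'openai', temperature: '0.7', top_p: '', system: '' } };
const args = { prompt: 'hello' }; // caller supplied only the required field

// Fall back to config defaults; coerce numeric settings from strings,
// and leave them undefined when the config does not set them.
const model = args.model ?? defaultConfig.text.model;
const temperature = args.temperature ??
  (defaultConfig.text.temperature ? Number(defaultConfig.text.temperature) : undefined);
const top_p = args.top_p ??
  (defaultConfig.text.top_p ? Number(defaultConfig.text.top_p) : undefined);
// → model 'openai', temperature 0.7 (number), top_p undefined
```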