# respondText
Generate text responses to prompts using AI models through the Pollinations Text API. Configure settings like model selection, temperature, and system prompts for customized output.
## Instructions
Respond with text to a prompt using the Pollinations Text API. User-configured settings in MCP config will be used as defaults unless specifically overridden.
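The user-configured defaults mentioned here surface inside the server as a `defaultConfig` object (see the dispatch excerpt under Implementation Reference). A hypothetical sketch of its shape, shown only for orientation: the key names under `text` are inferred from the `defaultConfig.text.*` accesses in that dispatch code, while the values are placeholders, not shipped defaults.

```javascript
// Hypothetical example of user-configured text defaults.
// Field names mirror the defaultConfig.text.* accesses in the dispatch code;
// the values are placeholders for illustration only.
const defaultConfig = {
  text: {
    model: 'openai',
    temperature: '0.7',   // coerced with Number() before use
    top_p: '0.9',         // coerced with Number() before use
    system: 'You are a helpful assistant.'
  }
};

console.log(defaultConfig.text.model);
```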
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The text prompt to generate a response for | |
| model | No | Model to use for text generation. Use listTextModels to see all available models | User config or "openai" |
| seed | No | Seed for reproducible results | Random |
| temperature | No | Controls randomness in the output (0.0 to 2.0) | User config or model default |
| top_p | No | Controls diversity via nucleus sampling (0.0 to 1.0) | User config or model default |
| system | No | System prompt to guide the model's behavior | User config or none |
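When the tool is invoked over MCP, these arguments travel in a standard `tools/call` request. A hypothetical example follows; the prompt and values are illustrative, and any parameter omitted here (such as `temperature` or `system`) falls back to the user's MCP config or the model's own default, as described above.

```javascript
// Hypothetical MCP tools/call request for respondText.
// Only `prompt` is required; omitted arguments fall back to configured defaults.
const callToolRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'respondText',
    arguments: {
      prompt: 'Summarize the plot of Hamlet in two sentences.',
      model: 'openai',
      seed: 42
    }
  }
};

console.log(JSON.stringify(callToolRequest, null, 2));
```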
## Implementation Reference
- `src/services/textService.js:23-78` (handler): Core handler function implementing the respondText tool by fetching generated text from the Pollinations Text API with configurable parameters. A usage sketch appears after this list.

  ```javascript
  export async function respondText(
    prompt,
    model = "openai",
    seed = Math.floor(Math.random() * 1000000),
    temperature = null,
    top_p = null,
    system = null,
    authConfig = null
  ) {
    if (!prompt || typeof prompt !== 'string') {
      throw new Error('Prompt is required and must be a string');
    }

    // Build the query parameters
    const queryParams = new URLSearchParams();
    if (model) queryParams.append('model', model);
    if (seed !== undefined) queryParams.append('seed', seed);
    if (temperature !== null) queryParams.append('temperature', temperature);
    if (top_p !== null) queryParams.append('top_p', top_p);
    if (system) queryParams.append('system', system);

    // Always set private to true
    queryParams.append('private', 'true');

    // Construct the URL
    const encodedPrompt = encodeURIComponent(prompt);
    const baseUrl = 'https://text.pollinations.ai';
    let url = `${baseUrl}/${encodedPrompt}`;

    // Add query parameters if they exist
    const queryString = queryParams.toString();
    if (queryString) {
      url += `?${queryString}`;
    }

    try {
      // Prepare fetch options with optional auth headers
      const fetchOptions = {};
      if (authConfig) {
        fetchOptions.headers = {};
        if (authConfig.token) {
          fetchOptions.headers['Authorization'] = `Bearer ${authConfig.token}`;
        }
        if (authConfig.referrer) {
          fetchOptions.headers['Referer'] = authConfig.referrer;
        }
      }

      // Fetch the text from the URL
      const response = await fetch(url, fetchOptions);

      if (!response.ok) {
        throw new Error(`Failed to generate text: ${response.statusText}`);
      }

      // Get the text response
      const textResponse = await response.text();
      return textResponse;
    } catch (error) {
      log('Error generating text:', error);
      throw error;
    }
  }
  ```
- `src/services/textSchema.js:8-41` (schema): Input schema and metadata definition for the respondText tool used in MCP tool registration.

  ```javascript
  export const respondTextSchema = {
    name: 'respondText',
    description: 'Respond with text to a prompt using the Pollinations Text API. User-configured settings in MCP config will be used as defaults unless specifically overridden.',
    inputSchema: {
      type: 'object',
      properties: {
        prompt: {
          type: 'string',
          description: 'The text prompt to generate a response for'
        },
        model: {
          type: 'string',
          description: 'Model to use for text generation (default: user config or "openai"). Use listTextModels to see all available models'
        },
        seed: {
          type: 'number',
          description: 'Seed for reproducible results (default: random)'
        },
        temperature: {
          type: 'number',
          description: 'Controls randomness in the output (0.0 to 2.0, default: user config or model default)'
        },
        top_p: {
          type: 'number',
          description: 'Controls diversity via nucleus sampling (0.0 to 1.0, default: user config or model default)'
        },
        system: {
          type: 'string',
          description: 'System prompt to guide the model\'s behavior (default: user config or none)'
        }
      },
      required: ['prompt']
    }
  };
  ```
- `pollinations-mcp-server.js:198-200` (registration): MCP server request handler for ListToolsRequestSchema that returns all tool schemas, including respondText, via getAllToolSchemas().

  ```javascript
  server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: getAllToolSchemas()
  }));
  ```
- `pollinations-mcp-server.js:345-361` (registration): Dispatch logic in the MCP server's CallToolRequestSchema handler that routes respondText calls to the tool implementation with defaults.

  ```javascript
  } else if (name === 'respondText') {
    try {
      const {
        prompt,
        model = defaultConfig.text.model,
        seed,
        temperature = defaultConfig.text.temperature ? Number(defaultConfig.text.temperature) : undefined,
        top_p = defaultConfig.text.top_p ? Number(defaultConfig.text.top_p) : undefined,
        system = defaultConfig.text.system
      } = args;

      const result = await respondText(prompt, model, seed, temperature, top_p, system, finalAuthConfig);

      return {
        content: [
          {
            type: 'text',
            text: result
          }
        ]
      };
    } catch (error) {
      return {
        content: [
          {
            type: 'text',
            text: `Error generating text response: ${error.message}`
          }
        ],
        isError: true
      };
    }
  ```
- `src/schemas.js:32-44` (registration): Central function that aggregates and returns all tool schemas for registration, including respondTextSchema.

  ```javascript
  export function getAllToolSchemas() {
    return [
      generateImageUrlSchema,
      generateImageSchema,
      editImageSchema,
      generateImageFromReferenceSchema,
      listImageModelsSchema,
      respondAudioSchema,
      listAudioVoicesSchema,
      respondTextSchema,
      listTextModelsSchema
    ];
  }
  ```
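A minimal usage sketch of the handler in `src/services/textService.js`, as referenced in the first entry above. It assumes Node 18+ (for the global `fetch` the handler relies on), an ES module context, and an import path relative to the repository root; the prompt and parameter values are illustrative.

```javascript
import { respondText } from './src/services/textService.js';

// Illustrative direct call: explicit model, seed, and temperature;
// top_p, system, and authConfig keep their defaults. The handler
// ends up requesting:
// https://text.pollinations.ai/<encoded prompt>?model=openai&seed=42&temperature=0.7&private=true
const text = await respondText(
  'Write a haiku about rain', // prompt
  'openai',                   // model
  42,                         // seed
  0.7                         // temperature
);
console.log(text);
```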