# ask_llm
Send WolframAlpha a natural-language question and receive a structured, LLM-optimized response in multiple formats.
## Instructions
Ask WolframAlpha a query and get an LLM-optimized, structured response in multiple formats.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | The query to ask WolframAlpha | (none) |
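
For example, a client invokes this tool by passing a single `query` argument in a `tools/call` request. The sketch below shows a hypothetical invocation through the MCP TypeScript SDK client; the helper name `askWolfram`, the already-connected `client`, and the sample query are assumptions for illustration, not part of this repository:

```typescript
// Hypothetical invocation of the `ask_llm` tool via the MCP TypeScript SDK.
// The connected `client`, the helper name, and the sample query are assumptions.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';

async function askWolfram(client: Client) {
  const result = await client.callTool({
    name: 'ask_llm',
    arguments: { query: 'What is the integral of x^2 sin(x)?' },
  });
  // The handler returns a single text content block with the formatted answer.
  return result.content;
}
```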
## Implementation Reference
- `src/tools/index.ts:23-71` (handler): The main handler for the `ask_llm` tool. It invokes `WolframLLMService.query()`, trims the raw result after a second "Assumption:" section, and formats the output text with the query, interpretation, result, and URL. A standalone sketch of the trimming step appears after this list.

  ```typescript
  handler: async (args: QueryArgs): Promise<ToolResponse> => {
    const response = await wolframLLMService.query(args.query);
    if (!response.success || !response.result) {
      throw new Error(response.error || 'Failed to get LLM response from WolframAlpha');
    }

    // Get the raw result text
    let rawText = response.result.result;

    // Find the second "Assumption:" section and remove everything after it
    const lines = rawText.split('\n\n');
    const firstAssumptionIndex = lines.findIndex(line => line.startsWith('Assumption:'));

    let processedLines = lines;
    if (firstAssumptionIndex >= 0) {
      const secondAssumptionIndex = lines.findIndex((line, i) =>
        i > firstAssumptionIndex && line.startsWith('Assumption:')
      );
      if (secondAssumptionIndex > 0) {
        // Keep only the content up to the second assumption
        processedLines = lines.slice(0, secondAssumptionIndex);

        // Check if there's a URL section and add it back if needed
        const urlLine = lines.find(line => line.startsWith('Wolfram|Alpha website result'));
        if (urlLine && !processedLines.includes(urlLine)) {
          processedLines.push(urlLine);
        }
      }
    }

    // Reconstruct the text
    let text = `Query: ${response.result.query}\n`;
    if (response.result.interpretation) {
      text += `Interpretation: ${response.result.interpretation}\n`;
    }
    text += `\nResult: ${processedLines.join('\n\n')}\n`;
    if (response.result.url) {
      text += `\nFull results: ${response.result.url}`;
    }

    return {
      content: [{ type: "text", text }]
    };
  }
  ```
- `src/tools/index.ts:13-22` (schema): Input schema defining the required `query` string parameter for the `ask_llm` tool.

  ```typescript
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "The query to ask WolframAlpha"
      }
    },
    required: ["query"]
  },
  ```
- `src/index.ts:25-29` (registration): MCP server capabilities registration declaring `ask_llm` as an available tool.

  ```typescript
  tools: {
    ask_llm: true,
    get_simple_answer: true,
    validate_key: true
  },
  ```
- `src/services/wolfram-llm.ts:203-259` (helper): Core helper method `WolframLLMService.query()` that performs the HTTP request to the WolframAlpha LLM API and does basic parsing; it is called by the tool handler.

  ```typescript
  async query(input: string): Promise<LLMQueryResult> {
    try {
      // Build query URL with parameters
      const params = new URLSearchParams({
        appid: this.config.appId,
        input
      });

      // Make request to LLM API
      const response = await axios.get(`${this.baseUrl}?${params.toString()}`);

      // Store raw response for error reporting
      const rawResponse = response.data;

      // Log raw response for debugging
      console.log('Raw API Response:', JSON.stringify(rawResponse, null, 2));

      if (typeof rawResponse !== 'string') {
        console.error('Unexpected response format:', rawResponse);
        return {
          success: false,
          error: 'Invalid response format from WolframAlpha API',
          rawResponse
        };
      }

      const result = this.parseQueryResponse(rawResponse);
      return { success: true, result };
    } catch (error) {
      console.error('WolframAlpha LLM API Error:', error);

      // Get raw response if available
      let rawResponse: unknown;
      if (axios.isAxiosError(error) && error.response?.data) {
        rawResponse = error.response.data;
        console.error('Raw API Response:', rawResponse);
      }

      if (axios.isAxiosError(error) && error.response?.status === 501) {
        return {
          success: false,
          error: 'Input cannot be interpreted. Try rephrasing your query.',
          rawResponse
        };
      }

      return {
        success: false,
        error: error instanceof Error ? error.message : 'Failed to query WolframAlpha LLM API',
        rawResponse
      };
    }
  }
  ```
- `src/types/index.ts:11-13` (schema): TypeScript interface `QueryArgs` used by the `ask_llm` handler for input validation.

  ```typescript
  export interface QueryArgs {
    query: string;
  }
  ```
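
To make the handler's trimming step easier to follow, here is a minimal standalone sketch of the same logic, run against a hypothetical raw result that contains two "Assumption:" sections. The sample text is invented for illustration and is not real API output:

```typescript
// Standalone sketch of the handler's assumption-trimming step.
// The raw result below is a hypothetical sample, not real API output.
const rawText = [
  'Query: population of Springfield',
  'Assumption: Springfield, Illinois',
  'Result: about 110,000 people',
  'Assumption: Springfield, Missouri',
  'Result: about 170,000 people',
  'Wolfram|Alpha website result for "population of Springfield": https://www.wolframalpha.com/input?i=population+of+Springfield',
].join('\n\n');

const lines = rawText.split('\n\n');
const firstAssumptionIndex = lines.findIndex(line => line.startsWith('Assumption:'));

let processedLines = lines;
if (firstAssumptionIndex >= 0) {
  const secondAssumptionIndex = lines.findIndex((line, i) =>
    i > firstAssumptionIndex && line.startsWith('Assumption:')
  );
  if (secondAssumptionIndex > 0) {
    // Drop the second assumption and everything after it...
    processedLines = lines.slice(0, secondAssumptionIndex);
    // ...but keep the website-result line if one was present.
    const urlLine = lines.find(line => line.startsWith('Wolfram|Alpha website result'));
    if (urlLine && !processedLines.includes(urlLine)) {
      processedLines.push(urlLine);
    }
  }
}

// Keeps the query, the first assumption, its result, and the website-result line.
console.log(processedLines.join('\n\n'));
```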
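
The `ToolResponse` and `LLMQueryResult` types referenced above are not included in this excerpt. The shapes below are inferred from how the handler and `WolframLLMService.query()` use them; they are a sketch, and the actual definitions in `src/types/index.ts` may differ:

```typescript
// Plausible shapes inferred from usage in the excerpts above.
// The real definitions in src/types/index.ts may differ.
interface ToolResponse {
  content: Array<{ type: 'text'; text: string }>;
}

interface LLMQueryResult {
  success: boolean;
  error?: string;
  rawResponse?: unknown;
  result?: {
    query: string;
    interpretation?: string;
    result: string;
    url?: string;
  };
}
```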