# getSecondOpinion
Query multiple LLM providers for responses to a single prompt on the MindBridge MCP Server. Select a provider, choose one of its models, and tune generation parameters to compare AI-driven answers.
## Instructions

Get responses from various LLM providers.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| frequency_penalty | No | Frequency penalty between -2.0 and 2.0 (OpenAI) | |
| maxTokens | No | Maximum number of tokens to generate (positive number) | 1024 |
| model | Yes | Model ID; must be available for the selected provider | |
| presence_penalty | No | Presence penalty between -2.0 and 2.0 (OpenAI) | |
| prompt | Yes | Prompt text (non-empty string) | |
| provider | Yes | Name of a configured LLM provider | |
| reasoning_effort | No | `low`, `medium`, or `high`; primarily for OpenAI o-series, ignored by providers that do not support it | |
| stop_sequences | No | Array of strings at which generation stops | |
| stream | No | Whether to stream the response (Google Gemini) | |
| systemPrompt | No | Optional system prompt | |
| temperature | No | Sampling temperature between 0 and 1 | |
| top_k | No | Top-k sampling cutoff (positive number) | |
| top_p | No | Nucleus (top-p) sampling threshold between 0 and 1 | |
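A call to the tool passes an arguments object matching this table. The sketch below is a hypothetical invocation; the provider and model names are purely illustrative and must match what your server actually has configured:

```typescript
// Hypothetical arguments for a getSecondOpinion tool call.
// Provider and model names are assumptions, not guaranteed to be configured.
const args = {
  provider: 'openai',                          // required
  model: 'gpt-4o',                             // required; must be valid for the provider
  prompt: 'Compare streaming vs. batch LLM responses.', // required, non-empty
  systemPrompt: 'You are a concise reviewer.', // optional
  temperature: 0.2,                            // optional, 0–1
  maxTokens: 512,                              // optional; server defaults to 1024
  reasoning_effort: 'low'                      // optional; ignored if unsupported
};

console.log(args.provider, args.model);
```

Omitted optional fields fall back to the defaults in the schema (only `maxTokens` has an explicit default).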
## Implementation Reference
- **src/server.ts:32-79 (handler)** — The main execution handler for the `getSecondOpinion` tool. It validates the provider and model, fetches the provider instance, calls its `getResponse` method with the input params, and returns the formatted LLM response or an error.

```typescript
async (params) => {
  try {
    // Validate provider exists
    const providerName = params.provider.toLowerCase();
    if (!this.providerFactory.hasProvider(providerName)) {
      const availableProviders = this.providerFactory.getAvailableProviders();
      throw new Error(
        `Provider "${params.provider}" not configured. Available providers: ${availableProviders.join(', ')}`
      );
    }

    const provider = this.providerFactory.getProvider(providerName)!;

    // Validate model exists for provider
    if (!provider.isValidModel(params.model)) {
      const availableModels = provider.getAvailableModels();
      throw new Error(
        `Model "${params.model}" not found for provider "${params.provider}". Available models: ${availableModels.join(', ')}`
      );
    }

    // Check reasoning effort compatibility
    if (params.reasoning_effort && !provider.supportsReasoningEffort()) {
      console.warn(
        `Warning: Provider "${params.provider}" does not support reasoning_effort parameter. It will be ignored.`
      );
    }

    // Get response from provider
    const result = await provider.getResponse(params);

    if (result.isError) {
      return {
        content: [{ type: 'text', text: `Error: ${result.content[0].text}` }],
        isError: true
      };
    }

    return { content: result.content };
  } catch (error) {
    return {
      content: [
        {
          type: 'text',
          text: `Error: ${error instanceof Error ? error.message : 'An unknown error occurred'}`
        }
      ],
      isError: true
    };
  }
}
```
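The handler never talks to a concrete provider directly; everything goes through `providerFactory` and the methods it calls on the provider instance. The sketch below reconstructs the contract those calls imply — the interface names and the mock are assumptions inferred from the handler, not the actual type declarations in `src/`:

```typescript
// Assumed response shape, inferred from how the handler reads `result`.
interface LLMResponse {
  content: { type: 'text'; text: string }[];
  isError?: boolean;
}

// Assumed provider contract; method names are taken from the handler above,
// parameter types are guesses.
interface Provider {
  isValidModel(model: string): boolean;
  getAvailableModels(): string[];
  supportsReasoningEffort(): boolean;
  getResponse(params: Record<string, unknown>): Promise<LLMResponse>;
}

// Minimal mock satisfying the contract, for illustration only.
const mock: Provider = {
  isValidModel: (m) => m === 'mock-model',
  getAvailableModels: () => ['mock-model'],
  supportsReasoningEffort: () => false,
  getResponse: async () => ({ content: [{ type: 'text', text: 'ok' }] })
};

console.log(mock.isValidModel('mock-model')); // true
```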
- **src/server.ts:29-80 (registration)** — Registration of the `getSecondOpinion` tool in the MCP server using `this.tool()`, providing the name, description, Zod schema shape, and the inline handler function shown above.

```typescript
this.tool(
  'getSecondOpinion',
  'Get responses from various LLM providers',
  GetSecondOpinionSchema.shape,
  async (params) => {
    // ... handler body as in src/server.ts:32-79 above
  }
);
```
- **src/types.ts:16-35 (schema)** — Zod schema defining the input structure for the `getSecondOpinion` tool, including the required `prompt`, `provider`, and `model` fields and optional LLM-specific parameters.

```typescript
export const GetSecondOpinionSchema = z.object({
  prompt: z.string().min(1),
  provider: LLMProvider,
  model: z.string().min(1),
  systemPrompt: z.string().optional().nullable(),
  temperature: z.number().min(0).max(1).optional(),
  maxTokens: z.number().positive().optional().default(1024),
  reasoning_effort: z.union([
    // Primarily for OpenAI o-series
    z.literal('low'),
    z.literal('medium'),
    z.literal('high')
  ]).optional().nullable(),
  // Add other potential parameters if needed based on updated APIs
  top_p: z.number().min(0).max(1).optional(),
  top_k: z.number().positive().optional(),
  stop_sequences: z.array(z.string()).optional(),
  stream: z.boolean().optional(), // For Google Gemini
  frequency_penalty: z.number().min(-2.0).max(2.0).optional(), // For OpenAI
  presence_penalty: z.number().min(-2.0).max(2.0).optional() // For OpenAI
});
```