getSecondOpinion

Query multiple LLM providers for responses to the same prompt. Select a provider, choose a model, and tune generation parameters to get a second opinion through the MindBridge MCP Server.

Instructions

Get responses from various LLM providers

Input Schema

Name               Required  Description                                      Default
frequency_penalty  No        Number in [-2.0, 2.0] (OpenAI)
maxTokens          No        Positive integer cap on generated tokens         1024
model              Yes       Model identifier (non-empty string)
presence_penalty   No        Number in [-2.0, 2.0] (OpenAI)
prompt             Yes       The prompt to send (non-empty string)
provider           Yes       LLM provider name (matched case-insensitively)
reasoning_effort   No        'low', 'medium', or 'high' (OpenAI o-series)
stop_sequences     No        Array of strings that end generation
stream             No        Whether to stream the response (Google Gemini)
systemPrompt       No        Optional system prompt
temperature        No        Sampling temperature in [0, 1]
top_k              No        Positive integer top-k sampling cutoff
top_p              No        Nucleus sampling probability in [0, 1]
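For example, a call's arguments might look like the following. The provider and model names here are illustrative placeholders, not a statement of what MindBridge ships with; use whichever providers you have configured:

```typescript
// Illustrative getSecondOpinion arguments. "openai" and "gpt-4o" are
// placeholder values; substitute a provider/model pair you have configured.
const args = {
  provider: "openai",          // required: one of your configured providers
  model: "gpt-4o",             // required: a model that provider exposes
  prompt: "Summarize the tradeoffs of optimistic vs pessimistic locking.",
  systemPrompt: "You are a concise reviewer.",
  temperature: 0.7,            // must fall in [0, 1] per the schema
  maxTokens: 1024,             // schema default when omitted
  reasoning_effort: "medium",  // warned about and ignored by providers that don't support it
};
```

Only prompt, provider, and model are required; every other field can be omitted and the schema's defaults apply.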

Implementation Reference

  • The main execution handler for the getSecondOpinion tool. It validates the provider and model, fetches the provider instance, calls its getResponse method with the input params, and returns the formatted LLM response or an error.
    async (params) => {
      try {
        // Validate provider exists
        const providerName = params.provider.toLowerCase();
        if (!this.providerFactory.hasProvider(providerName)) {
          const availableProviders = this.providerFactory.getAvailableProviders();
          throw new Error(
            `Provider "${params.provider}" not configured. Available providers: ${availableProviders.join(', ')}`
          );
        }
        const provider = this.providerFactory.getProvider(providerName)!;

        // Validate model exists for provider
        if (!provider.isValidModel(params.model)) {
          const availableModels = provider.getAvailableModels();
          throw new Error(
            `Model "${params.model}" not found for provider "${params.provider}". Available models: ${availableModels.join(', ')}`
          );
        }

        // Check reasoning effort compatibility
        if (params.reasoning_effort && !provider.supportsReasoningEffort()) {
          console.warn(
            `Warning: Provider "${params.provider}" does not support reasoning_effort parameter. It will be ignored.`
          );
        }

        // Get response from provider
        const result = await provider.getResponse(params);

        if (result.isError) {
          return {
            content: [{ type: 'text', text: `Error: ${result.content[0].text}` }],
            isError: true
          };
        }

        return { content: result.content };
      } catch (error) {
        return {
          content: [{
            type: 'text',
            text: `Error: ${error instanceof Error ? error.message : 'An unknown error occurred'}`
          }],
          isError: true
        };
      }
    }
  • src/server.ts:29-80 (registration)
    Registration of the getSecondOpinion tool in the MCP server using this.tool(), providing name, description, Zod schema, and inline handler function.
    this.tool(
      'getSecondOpinion',
      'Get responses from various LLM providers',
      GetSecondOpinionSchema.shape,
      async (params) => {
        // ... handler body as shown above ...
      }
    );
  • Zod schema defining the input structure for the getSecondOpinion tool, including required fields like prompt, provider, model, and optional LLM-specific parameters.
    export const GetSecondOpinionSchema = z.object({
      prompt: z.string().min(1),
      provider: LLMProvider,
      model: z.string().min(1),
      systemPrompt: z.string().optional().nullable(),
      temperature: z.number().min(0).max(1).optional(),
      maxTokens: z.number().positive().optional().default(1024),
      reasoning_effort: z.union([
        // Primarily for OpenAI o-series
        z.literal('low'),
        z.literal('medium'),
        z.literal('high')
      ]).optional().nullable(),
      // Add other potential parameters if needed based on updated APIs
      top_p: z.number().min(0).max(1).optional(),
      top_k: z.number().positive().optional(),
      stop_sequences: z.array(z.string()).optional(),
      stream: z.boolean().optional(), // For Google Gemini
      frequency_penalty: z.number().min(-2.0).max(2.0).optional(), // For OpenAI
      presence_penalty: z.number().min(-2.0).max(2.0).optional() // For OpenAI
    });
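The handler relies on a provider abstraction whose definition isn't shown on this page. The sketch below reconstructs the interface implied by the calls the handler makes (hasProvider, getProvider, getAvailableProviders, isValidModel, getAvailableModels, supportsReasoningEffort, getResponse), with a toy in-memory provider for illustration. These names and shapes are inferred from the handler, not the actual MindBridge source:

```typescript
// Result envelope matching what the handler returns: an array of text parts
// plus an optional error flag. Inferred from the handler code above.
interface ToolResult {
  content: { type: 'text'; text: string }[];
  isError?: boolean;
}

// Interface implied by the calls the handler makes on `provider`.
interface LLMProviderClient {
  isValidModel(model: string): boolean;
  getAvailableModels(): string[];
  supportsReasoningEffort(): boolean;
  getResponse(params: { prompt: string; model: string }): Promise<ToolResult>;
}

// Toy provider used only to illustrate the contract; it echoes the prompt.
class EchoProvider implements LLMProviderClient {
  private models = ['echo-1'];
  isValidModel(model: string) { return this.models.includes(model); }
  getAvailableModels() { return [...this.models]; }
  supportsReasoningEffort() { return false; }
  async getResponse(params: { prompt: string; model: string }): Promise<ToolResult> {
    return { content: [{ type: 'text', text: `echo: ${params.prompt}` }] };
  }
}

// Factory shape implied by `this.providerFactory` in the handler.
class ProviderFactory {
  private providers = new Map<string, LLMProviderClient>();
  register(name: string, p: LLMProviderClient) { this.providers.set(name, p); }
  hasProvider(name: string) { return this.providers.has(name); }
  getProvider(name: string) { return this.providers.get(name); }
  getAvailableProviders() { return [...this.providers.keys()]; }
}
```

Registering one provider and looking it up exercises the same checks the handler performs: hasProvider gates unknown provider names, and isValidModel gates unknown models before any network call is made.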


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pinkpixel-dev/mindbridge-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.