switch_model

Switch the active Whisper model for the current session without restarting Claude Desktop. Accepts a model filename for an already installed model.

Instructions

Switch the active Whisper model for the current session without restarting Claude Desktop. Accepts a model filename (e.g. ggml-large-v3-turbo.bin) or full path. The model must already be installed in your models directory. Use list_models to see installed models, download_model to add new ones. Change is session-scoped — does not persist after Claude Desktop restarts.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| model_name | Yes | Model filename (e.g. ggml-large-v3-turbo.bin) or full path. Must be a .bin file in the configured models directory. | — |
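For example, an agent might invoke the tool with a request body like the following (the filename shown is illustrative; any installed .bin model works):

```json
{
  "name": "switch_model",
  "arguments": {
    "model_name": "ggml-large-v3-turbo.bin"
  }
}
```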

Implementation Reference

  • src/index.ts:1108-1127 (registration)
    Tool registration in ListToolsRequestSchema handler (lines 1108-1125). Defines switch_model's name, description, and input schema.
          name: "switch_model",
          description:
            "Switch the active Whisper model for the current session without restarting Claude Desktop. " +
            "Accepts a model filename (e.g. ggml-large-v3-turbo.bin) or full path. " +
            "The model must already be installed in your models directory. " +
            "Use list_models to see installed models, download_model to add new ones. " +
            "Change is session-scoped — does not persist after Claude Desktop restarts.",
          inputSchema: {
            type: "object",
            properties: {
              model_name: {
                type: "string",
                description: "Model filename (e.g. ggml-large-v3-turbo.bin) or full path. Must be a .bin file in the configured models directory.",
              },
            },
            required: ["model_name"],
          },
        },
      ],
    }));
  • Handler for switch_model tool (lines 1478-1549). Validates input (must be .bin, no path traversal), resolves path relative to models directory, enforces security constraint (must be within models dir), checks file exists, prevents switching mid-transcription, then updates the mutable WHISPER_MODEL variable. Returns previous/active model info.
    if (name === "switch_model") {
      const modelInput = (args?.model_name as string)?.trim();
      if (!modelInput) return { content: [{ type: "text", text: "model_name is required." }], isError: true };
    
      // Security: must end in .bin
      if (!modelInput.endsWith(".bin")) {
        return {
          content: [{ type: "text", text: `Invalid model: "${modelInput}"\nModel files must end in .bin` }],
          isError: true,
        };
      }
    
      // Security: reject path traversal
      if (UNSAFE_PATH_RE.test(modelInput)) {
        return {
          content: [{ type: "text", text: `Invalid path: "${modelInput}"\nPaths containing ".." or UNC paths are not allowed.` }],
          isError: true,
        };
      }
    
      // Resolve to full path — either absolute or relative to models dir
      const modelsDir = dirname(WHISPER_MODEL);
      const resolvedPath = modelInput.includes("\\") || modelInput.includes("/")
        ? modelInput
        : join(modelsDir, modelInput);
    
      // Security: must live within the configured models directory
      if (!resolvedPath.startsWith(modelsDir)) {
        return {
          content: [{ type: "text", text: `Security error: model must be within the configured models directory (${modelsDir}).` }],
          isError: true,
        };
      }
    
      if (!existsSync(resolvedPath)) {
        return {
          content: [{
            type: "text",
            text:
              `Model not found: ${resolvedPath}\n\n` +
              `Use list_models to see installed models, or download_model to install a new one.`,
          }],
          isError: true,
        };
      }
    
      // Process lock — don't switch mid-transcription
      if (await isWhisperRunning()) {
        return {
          content: [{ type: "text", text: "Cannot switch model while a transcription is in progress. Wait for the current job to finish first." }],
          isError: true,
        };
      }
    
      const previousModel = basename(WHISPER_MODEL);
      WHISPER_MODEL = resolvedPath;
      const newModel = basename(WHISPER_MODEL);
      const sizeMb = (statSync(resolvedPath).size / (1024 * 1024)).toFixed(0);
      const known = MODEL_REGISTRY.find(m => m.filename === newModel);
    
      return {
        content: [{
          type: "text",
          text:
            `✅ Model switched!\n\n` +
            `Previous: ${previousModel}\n` +
            `Active:   ${newModel} (${sizeMb} MB)\n` +
            (known ? `Use case: ${known.useCase}\n` : "") +
            `\nThis change is session-scoped. To make it permanent, update WHISPER_MODEL in claude_desktop_config.json.`,
        }],
      };
    }
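The validation chain above (extension check, traversal rejection, resolution against the models directory, containment check) can be sketched as a standalone function. Note that `UNSAFE_PATH_RE` is not shown in the excerpt, so the regex below — matching ".." segments and UNC prefixes — is an assumption for illustration:

```typescript
import { join, dirname } from "node:path";

// Assumed shape of UNSAFE_PATH_RE: rejects ".." path segments and UNC ("\\server") prefixes.
const UNSAFE_PATH_RE = /(^|[\\/])\.\.([\\/]|$)|^\\\\/;

function resolveModelPath(modelInput: string, activeModelPath: string): string {
  // Security: only .bin model files are accepted.
  if (!modelInput.endsWith(".bin")) throw new Error("Model files must end in .bin");
  // Security: reject path traversal and UNC paths.
  if (UNSAFE_PATH_RE.test(modelInput)) throw new Error("Path traversal rejected");
  const modelsDir = dirname(activeModelPath);
  // Bare filenames are joined onto the models directory; anything containing
  // a separator is treated as a full path.
  const resolved = modelInput.includes("\\") || modelInput.includes("/")
    ? modelInput
    : join(modelsDir, modelInput);
  // Security: the resolved path must live within the configured models directory.
  if (!resolved.startsWith(modelsDir)) throw new Error("Outside models directory");
  return resolved;
}
```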
  • Mutable module-level variable WHISPER_MODEL that switch_model updates at runtime (lines 32-35). Initialized from an environment variable or a default path.
    let WHISPER_MODEL =
      process.env.WHISPER_MODEL ?? "C:\\whisper\\models\\ggml-base.en.bin";
    const FFMPEG_PATH =
      process.env.FFMPEG_PATH ?? "ffmpeg";
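Because the server reads WHISPER_MODEL from the environment only at startup, persisting a model choice across restarts means setting it in claude_desktop_config.json. A typical entry might look like this (the server name and paths are illustrative, not taken from the excerpt):

```json
{
  "mcpServers": {
    "whisper": {
      "command": "node",
      "args": ["C:\\whisper-mcp\\dist\\index.js"],
      "env": {
        "WHISPER_MODEL": "C:\\whisper\\models\\ggml-large-v3-turbo.bin"
      }
    }
  }
}
```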
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the session-scoped, non-persistent behavior. It does not mention potential side effects on ongoing operations or error handling for missing models, but the core behavioral trait is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: purpose, parameter details, usage guidelines and scope. No redundant information, front-loaded with key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter, absent output schema, and absent annotations, the description fully covers purpose, prerequisites, parameter semantics, and behavioral context (session scope). No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds meaning by specifying the format ('filename or full path') and the constraint ('Must be a .bin file in the configured models directory'), enhancing the agent's understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action ('Switch the active Whisper model for the current session'), specifying both the verb and the resource. Context distinguishes it from sibling tools like list_models (viewing) and download_model (adding).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('without restarting Claude Desktop'), mentions prerequisites ('model must already be installed'), and references alternatives ('Use list_models to see installed models, download_model to add new ones'). Also clarifies session scope.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
