list_models
View all installed Whisper models with details on file size, activation status, quantization, and use case. Reads the local filesystem only; no network access is required.
Instructions
List all Whisper model files installed in your models directory. Shows filename, size, whether it is currently active, quantization status, and recommended use case for each model. No network calls — reads local filesystem only.
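On success the tool replies with a single MCP text content block; when the models directory is missing or unreadable it sets `isError` instead. A sketch of the result shape a client sees, with the text and path hand-built here purely for illustration:

```typescript
// Minimal MCP tool-result shape as returned by list_models.
interface TextContent { type: "text"; text: string }
interface ToolResult { content: TextContent[]; isError?: boolean }

// Hand-constructed example value (the path and listing are illustrative).
const result: ToolResult = {
  content: [{ type: "text", text: "Installed models in: C:\\whisper\\models\n..." }],
};

// A client typically concatenates the text blocks to get the report.
const report = result.content.map(c => c.text).join("\n");
```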
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| _No arguments_ | | | |
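Because the input schema declares no properties, a call needs only the tool name and an empty arguments object. A sketch of the call parameters; the `client.callTool` usage in the comment assumes an MCP TypeScript SDK client and is not part of this tool's source:

```typescript
// Parameters for a tools/call invocation of list_models.
const callParams = {
  name: "list_models",
  arguments: {}, // empty: the input schema declares no properties
};

// With an assumed MCP SDK client instance (not shown here):
//   const result = await client.callTool(callParams);
//   result.content[0].text then holds the formatted model listing.
```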
Implementation Reference
- src/index.ts:1080-1088 (registration): The 'list_models' tool is registered in the ListToolsRequestSchema handler (lines 886-1127) with its name, description, and empty inputSchema.

```typescript
{
  name: "list_models",
  description:
    "List all Whisper model files installed in your models directory. " +
    "Shows filename, size, whether it is currently active, quantization status, " +
    "and recommended use case for each model. " +
    "No network calls — reads local filesystem only.",
  inputSchema: { type: "object", properties: {} },
},
```

- src/index.ts:1300-1356 (handler): The 'list_models' handler function, executed when the tool is called. It reads the models directory, lists .bin files, shows size and active status, cross-references MODEL_REGISTRY for descriptions, and also lists the downloadable models that are not yet installed.
```typescript
// list_models
// -------------------------------------------------------------------------
if (name === "list_models") {
  const modelsDir = dirname(WHISPER_MODEL);
  if (!existsSync(modelsDir)) {
    return { content: [{ type: "text", text: `Models directory not found: ${modelsDir}` }], isError: true };
  }

  let files: string[];
  try {
    files = readdirSync(modelsDir).filter(f => f.endsWith(".bin"));
  } catch (err: any) {
    return { content: [{ type: "text", text: `Could not read models directory: ${err?.message}` }], isError: true };
  }

  if (files.length === 0) {
    return {
      content: [{
        type: "text",
        text:
          `No .bin model files found in: ${modelsDir}\n\n` +
          `Use download_model to install a model.\n` +
          `Recommended starting point: large-v3-turbo (English GPU) or large-v3-turbo-q5_0 (CPU/multilingual)`,
      }],
    };
  }

  const activeFile = basename(WHISPER_MODEL);
  const rows = files.map(f => {
    const fullPath = join(modelsDir, f);
    const sizeMb = (() => {
      try { return (statSync(fullPath).size / (1024 * 1024)).toFixed(0) + " MB"; }
      catch { return "?"; }
    })();
    const isActive = f === activeFile ? " ◀ ACTIVE" : "";
    const known = MODEL_REGISTRY.find(m => m.filename === f);
    const quantTag = known?.quantized ? " [quantized]" : "";
    const useCase = known ? known.useCase : "Unknown model";
    return `${isActive ? "●" : "○"} ${f}${isActive}${quantTag}\n Size: ${sizeMb} | ${useCase}`;
  });

  // Also list downloadable models not yet installed
  const installedFilenames = new Set(files);
  const available = MODEL_REGISTRY
    .filter(m => !installedFilenames.has(m.filename))
    .map(m => ` ${m.name} (${m.filename}, ~${m.sizeMb} MB) — ${m.useCase}`)
    .join("\n");

  return {
    content: [{
      type: "text",
      text:
        `Installed models in: ${modelsDir}\n${"─".repeat(60)}\n\n` +
        rows.join("\n\n") +
        (available
          ? `\n\n${"─".repeat(60)}\nAvailable to download:\n${available}\n\nUse download_model <name> to install.`
          : `\n\n${"─".repeat(60)}\nAll known models are installed.`),
    }],
  };
}
```

- src/index.ts:549-579 (helper): The MODEL_REGISTRY constant that list_models uses to look up model metadata (name, filename, size, quantized status, use case) for enriching the output.
```typescript
interface ModelEntry {
  name: string;
  filename: string;
  sizeMb: number;
  multilingual: boolean;
  quantized: boolean;
  useCase: string;
  url: string;
}

const MODEL_REGISTRY: ModelEntry[] = [
  // Full-precision English
  { name: "tiny.en", filename: "ggml-tiny.en.bin", sizeMb: 75, multilingual: false, quantized: false,
    useCase: "Quick tests, lowest accuracy",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.en.bin" },
  { name: "base.en", filename: "ggml-base.en.bin", sizeMb: 142, multilingual: false, quantized: false,
    useCase: "Fast English, good accuracy",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin" },
  { name: "small.en", filename: "ggml-small.en.bin", sizeMb: 466, multilingual: false, quantized: false,
    useCase: "Better English accuracy",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.en.bin" },
  { name: "medium.en", filename: "ggml-medium.en.bin", sizeMb: 1500, multilingual: false, quantized: false,
    useCase: "High accuracy English, fast on GPU",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium.en.bin" },

  // Full-precision multilingual
  { name: "tiny", filename: "ggml-tiny.bin", sizeMb: 75, multilingual: true, quantized: false,
    useCase: "Multilingual, minimal accuracy",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-tiny.bin" },
  { name: "base", filename: "ggml-base.bin", sizeMb: 142, multilingual: true, quantized: false,
    useCase: "Multilingual, fast",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.bin" },
  { name: "small", filename: "ggml-small.bin", sizeMb: 466, multilingual: true, quantized: false,
    useCase: "Multilingual, better accuracy",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.bin" },
  { name: "medium", filename: "ggml-medium.bin", sizeMb: 1500, multilingual: true, quantized: false,
    useCase: "Multilingual, high accuracy",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium.bin" },
  { name: "large-v3", filename: "ggml-large-v3.bin", sizeMb: 2900, multilingual: true, quantized: false,
    useCase: "Best accuracy, multilingual — requires 6GB+ VRAM",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3.bin" },
  { name: "large-v3-turbo", filename: "ggml-large-v3-turbo.bin", sizeMb: 1600, multilingual: true, quantized: false,
    useCase: "~6x faster than large-v3, minimal accuracy loss — RECOMMENDED for English GPU batch work",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3-turbo.bin" },

  // Quantized variants — smaller, CPU-friendly
  { name: "base.en-q5_1", filename: "ggml-base.en-q5_1.bin", sizeMb: 57, multilingual: false, quantized: true,
    useCase: "Tiny English model, CPU-friendly",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en-q5_1.bin" },
  { name: "small.en-q5_1", filename: "ggml-small.en-q5_1.bin", sizeMb: 181, multilingual: false, quantized: true,
    useCase: "Fast English, low memory, good for CPU",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-small.en-q5_1.bin" },
  { name: "medium.en-q5_0", filename: "ggml-medium.en-q5_0.bin", sizeMb: 514, multilingual: false, quantized: true,
    useCase: "High accuracy English, CPU-friendly — good default for no-GPU systems",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium.en-q5_0.bin" },
  { name: "large-v3-q5_0", filename: "ggml-large-v3-q5_0.bin", sizeMb: 1080, multilingual: true, quantized: true,
    useCase: "Best multilingual quality at half the size",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3-q5_0.bin" },
  { name: "large-v3-turbo-q5_0", filename: "ggml-large-v3-turbo-q5_0.bin", sizeMb: 547, multilingual: true, quantized: true,
    useCase: "RECOMMENDED for CPU-only multilingual — fast, low memory, good accuracy",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3-turbo-q5_0.bin" },
  { name: "large-v3-turbo-q8_0", filename: "ggml-large-v3-turbo-q8_0.bin", sizeMb: 874, multilingual: true, quantized: true,
    useCase: "Turbo quality closer to full precision, moderate size",
    url: "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3-turbo-q8_0.bin" },
];
```

- src/index.ts:886-1127 (registration): The ListToolsRequestSchema handler that registers all tool definitions, including list_models.
```typescript
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "transcribe_audio",
      description:
        "Transcribe a single audio or video file using whisper.cpp on Windows. " +
        "Natively supports mp3 and wav. Automatically converts mp4, mkv, avi, mov, " +
        "webm, m4a, flac, ogg etc. via FFmpeg — no manual conversion needed. " +
        "Can output plain text, timestamps, JSON, or SRT subtitle files. " +
        "For files that may take more than 4 minutes, set background=true to run as a detached job " +
        "and use check_progress to monitor it.",
      inputSchema: {
        type: "object",
        properties: {
          file_path: { type: "string", description: "Absolute Windows path, e.g. C:\\Users\\You\\Downloads\\recording.mp4" },
          model: { type: "string", description: "Override model path. Leave blank to use active model." },
          language: { type: "string", description: "Language code (e.g. en, ja, es, fr) or 'auto' to detect automatically. Defaults to en.", default: "en" },
          output_format: {
            type: "string",
            enum: ["text", "timestamps", "json", "srt"],
            description: "text = plain (default), timestamps = with time codes, json = structured, srt = subtitle file saved next to source.",
            default: "text",
          },
          threads: { type: "number", description: `CPU threads. Defaults to ${WHISPER_THREADS} of ${SYSTEM_THREADS}.` },
          save_to_file: { type: "boolean", description: "Save transcript as .txt next to the source file.", default: false },
          background: { type: "boolean", description: "Run as a detached background job. Returns a job ID immediately. Use check_progress to monitor. Recommended for files over 10 minutes.", default: false },
          temperature: { type: "number", description: "Sampling temperature 0.0–1.0. Default 0.0 (deterministic). Higher values reduce hallucination on noisy audio at the cost of consistency." },
          prompt: { type: "string", description: "Prior context string injected before transcription. Improves accuracy for domain-specific vocabulary, speaker names, or technical terms. Example: 'Names: Keemstar, DramaAlert.'" },
          condition_on_prev_text: { type: "boolean", description: "Re-enable conditioning each segment on its own prior output (removes --max-context 0 flag). Default false (off). Only enable for highly structured audio where context continuity helps.", default: false },
          no_speech_thold: { type: "number", description: "Confidence threshold below which segments are treated as silence rather than transcribed. Default 0.6.", default: 0.6 },
          beam_size: { type: "number", description: "Beam search width. Higher = more accurate but slower. Default 5." },
          best_of: { type: "number", description: "Number of candidate sequences to evaluate. Default 5." },
          gpu_device: { type: "number", description: "GPU device index for multi-GPU systems. Use check_system to see available GPUs. Default 0." },
          processors: { type: "number", description: "Number of parallel processors for chunk processing. Default 1." },
          word_timestamps: { type: "boolean", description: "Output one word per timestamped segment (sets --max-len 1 --split-on-word). Useful for clip alignment and precise timecode search.", default: false },
          max_segment_length: { type: "number", description: "Maximum segment length in characters. Controls line break frequency in output. Ignored when word_timestamps=true." },
          split_on_word: { type: "boolean", description: "Split segments at word boundaries rather than mid-word. Defaults to false.", default: false },
          diarize: { type: "boolean", description: "Stereo speaker diarization — labels left/right channel speakers in transcript. Requires stereo audio with speakers on separate channels.", default: false },
          vad_model: { type: "string", description: "Absolute path to a Silero VAD model .bin file. When provided, voice activity detection strips silence before transcription — reduces hallucinations and speeds up processing. Download via download_model." },
          offset_t: { type: "number", description: "Start transcription at this offset in milliseconds. Use to process a specific section of a file." },
          duration: { type: "number", description: "Process only this many milliseconds of audio starting from offset_t (or the beginning). Use with offset_t to target a specific time window." },
        },
        required: ["file_path"],
      },
    },
    {
      name: "check_progress",
      description:
        "Check the status of a background transcription job started with transcribe_audio (background=true). " +
        "Returns current progress, elapsed time, last processed timestamp, and the transcript when complete. " +
        "Call this repeatedly until the job shows as complete or failed.",
      inputSchema: {
        type: "object",
        properties: {
          job_id: { type: "string", description: "Job ID returned by transcribe_audio when background=true." },
        },
        required: ["job_id"],
      },
    },
    {
      name: "transcribe_batch",
      description:
        "Transcribe multiple audio/video files in a folder interactively, one file at a time. " +
        "Shows a preview of each transcript and waits for confirmation before continuing. " +
        "Saves each transcript as a .txt file next to its source. " +
        "Files already transcribed (with matching .txt) are shown as done and skipped. " +
        "Supported formats: mp3, wav, mp4, mkv, avi, mov, webm, m4a, flac, ogg. " +
        "NOTE: For large unattended batch jobs, use whisper-cli.exe directly from the command line " +
        "— see TROUBLESHOOTING.md for the command syntax.",
      inputSchema: {
        type: "object",
        properties: {
          folder_path: { type: "string", description: "Absolute Windows path to the folder." },
          file_index: { type: "number", description: "Which file to process (1-based). Omit to list files first." },
          language: { type: "string", description: "Language code. Defaults to en.", default: "en" },
          threads: { type: "number", description: `CPU threads. Defaults to ${WHISPER_THREADS} of ${SYSTEM_THREADS}.` },
          recursive: { type: "boolean", description: "Include subfolders. Defaults to false.", default: false },
        },
        required: ["folder_path"],
      },
    },
    {
      name: "generate_subtitles",
      description:
        "Generate subtitle files for an audio or video file using whisper.cpp. " +
        "Set language='auto' to detect the spoken language automatically. " +
        "Set translate_to_english=true to also generate an English translation subtitle file. " +
        "When both are requested, two .srt files are saved: one in the original language (e.g. film.ja.srt) " +
        "and one English translation (film.en.srt). " +
        "Load in VLC via Subtitle → Add Subtitle File. " +
        "Supports all standard formats plus .3gp and .ts.",
      inputSchema: {
        type: "object",
        properties: {
          file_path: { type: "string", description: "Absolute Windows path to the file." },
          language: { type: "string", description: "Language code (e.g. ja, es, fr, de) or 'auto' to detect automatically. Defaults to en.", default: "en" },
          translate_to_english: { type: "boolean", description: "Also generate an English translation .srt alongside the native language .srt. Only applies when language is not 'en'. Not available in background mode.", default: false },
          background: { type: "boolean", description: "Run as a detached background job — recommended for files over 10 minutes. Returns a job ID to use with check_progress. translate_to_english is not available in background mode.", default: false },
          threads: { type: "number", description: `CPU threads. Defaults to ${WHISPER_THREADS} of ${SYSTEM_THREADS}.` },
          temperature: { type: "number", description: "Sampling temperature 0.0–1.0. Default 0.0." },
          prompt: { type: "string", description: "Prior context string for domain-specific vocabulary or speaker names." },
          beam_size: { type: "number", description: "Beam search width. Higher = more accurate, slower. Default 5." },
          best_of: { type: "number", description: "Candidate sequences evaluated. Default 5." },
          diarize: { type: "boolean", description: "Stereo speaker diarization. Requires stereo audio with speakers on separate channels.", default: false },
          vad_model: { type: "string", description: "Path to Silero VAD model .bin. Strips silence before transcription. Download via download_model." },
        },
        required: ["file_path"],
      },
    },
    {
      name: "check_config",
      description: "Verify whisper-cli.exe, model, and FFmpeg are all available. Run this first if anything fails.",
      inputSchema: { type: "object", properties: {} },
    },
    {
      name: "start_batch",
      description:
        "Start an automated sequential batch transcription of all untranscribed files in a folder. " +
        "Scans for files without a matching .txt, sorts by duration (shortest first), " +
        "and processes them one at a time as background jobs. " +
        "Each file is validated after completion — empty or suspiciously short outputs are flagged. " +
        "Returns a batch ID to use with check_batch_progress.",
      inputSchema: {
        type: "object",
        properties: {
          folder_path: { type: "string", description: "Absolute Windows path to the folder." },
          language: { type: "string", description: "Language code. Defaults to en.", default: "en" },
          threads: { type: "number", description: `CPU threads. Defaults to ${WHISPER_THREADS} of ${SYSTEM_THREADS}.` },
        },
        required: ["folder_path"],
      },
    },
    {
      name: "check_batch_progress",
      description:
        "Check the status of a batch started with start_batch. " +
        "Automatically advances to the next file when the current one finishes. " +
        "Returns overall progress, current file, failed files, and elapsed time. " +
        "Call repeatedly until the batch shows as complete.",
      inputSchema: {
        type: "object",
        properties: {
          batch_id: { type: "string", description: "Batch ID returned by start_batch." },
        },
        required: ["batch_id"],
      },
    },
    {
      name: "analyze_media",
      description:
        "Analyze one or more media files using FFprobe before transcribing. " +
        "For a single file: returns duration, size, codec, and estimated transcription time on CPU and GPU. " +
        "For a folder: scans all supported media files and returns a sorted table with the same info for each. " +
        "Use this to plan batch work, estimate how long transcription will take, or check what's already been transcribed.",
      inputSchema: {
        type: "object",
        properties: {
          path: { type: "string", description: "Absolute Windows path to a single file or a folder." },
          sort_by: { type: "string", enum: ["duration", "name", "size"], description: "For folder scans: sort order. Defaults to duration (shortest first).", default: "duration" },
        },
        required: ["path"],
      },
    },
    {
      name: "check_system",
      description:
        "Detect GPU hardware and verify Vulkan acceleration is available. " +
        "Reports GPU name, VRAM, whether the Vulkan binary is installed, " +
        "and recommends the best Whisper model for your hardware. " +
        "Run this if you want to confirm GPU acceleration is working or diagnose why it isn't.",
      inputSchema: { type: "object", properties: {} },
    },
    {
      name: "list_models",
      description:
        "List all Whisper model files installed in your models directory. " +
        "Shows filename, size, whether it is currently active, quantization status, " +
        "and recommended use case for each model. " +
        "No network calls — reads local filesystem only.",
      inputSchema: { type: "object", properties: {} },
    },
    {
      name: "download_model",
      description:
        "Download a Whisper model from Hugging Face directly into your models directory. " +
        "Accepts a model name (e.g. large-v3-turbo, medium.en-q5_0) and handles the download automatically. " +
        "Downloads only from trusted Hugging Face namespaces (ggerganov/whisper.cpp and ggml-org). " +
        "After downloading, use switch_model to activate it for the current session.",
      inputSchema: {
        type: "object",
        properties: {
          model_name: { type: "string", description: "Model name to download, e.g. 'large-v3-turbo', 'medium.en-q5_0', 'large-v3-turbo-q5_0'. Use list_models to see what is already installed." },
        },
        required: ["model_name"],
      },
    },
    {
      name: "switch_model",
      description:
        "Switch the active Whisper model for the current session without restarting Claude Desktop. " +
        "Accepts a model filename (e.g. ggml-large-v3-turbo.bin) or full path. " +
        "The model must already be installed in your models directory. " +
        "Use list_models to see installed models, download_model to add new ones. " +
        "Change is session-scoped — does not persist after Claude Desktop restarts.",
      inputSchema: {
        type: "object",
        properties: {
          model_name: { type: "string", description: "Model filename (e.g. ggml-large-v3-turbo.bin) or full path. Must be a .bin file in the configured models directory." },
        },
        required: ["model_name"],
      },
    },
  ],
}));
```
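The per-file row that the list_models handler builds (active marker, quantization tag, size, use case) can be isolated into a small pure function. This is a re-derivation for illustration only; `formatModelRow` and `RegistryEntry` are names introduced here, not identifiers from the source:

```typescript
// Simplified registry entry: only the fields the row formatter reads.
interface RegistryEntry { filename: string; quantized: boolean; useCase: string }

// Build one display row for a model file, mirroring the handler's logic:
// filled/empty circle for active state, optional [quantized] tag,
// size in whole megabytes, and the registry use case (or a fallback).
function formatModelRow(
  file: string,
  sizeBytes: number,
  activeFile: string,
  registry: RegistryEntry[],
): string {
  const sizeMb = (sizeBytes / (1024 * 1024)).toFixed(0) + " MB";
  const isActive = file === activeFile ? " ◀ ACTIVE" : "";
  const known = registry.find(m => m.filename === file);
  const quantTag = known?.quantized ? " [quantized]" : "";
  const useCase = known ? known.useCase : "Unknown model";
  return `${isActive ? "●" : "○"} ${file}${isActive}${quantTag}\n Size: ${sizeMb} | ${useCase}`;
}

const row = formatModelRow(
  "ggml-base.en.bin",
  142 * 1024 * 1024,
  "ggml-base.en.bin",
  [{ filename: "ggml-base.en.bin", quantized: false, useCase: "Fast English, good accuracy" }],
);
```

Keeping the formatting pure like this (filesystem reads happen outside, only numbers and strings come in) is what makes the handler's output easy to test and to reuse for the "Available to download" section.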