transcribe_audio

Transcribe project audio locally using faster-whisper offline transcription. Automatically loads the model, runs in the background, and saves the transcript for editing in Audacity.

Instructions

[EXPERIMENTAL] Transcribe the entire project audio using faster-whisper (local, offline). Requires separate setup — see installation guide. If this fails, tell the user transcription is experimental and point them to the Transcription Setup docs.

Runs in BACKGROUND — returns a job_id immediately. Use check_transcription_status to monitor progress. Poll every 10-15 seconds.

Do NOT call transcription_set_model first — this handles model loading automatically.

After transcription completes, TELL the user where the transcript was saved, or offer to save it. Always report the file location so the user can find it.
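As a sketch, the start-then-poll flow described above might look like this on the client side. Here `call_tool` is a hypothetical stand-in for however your MCP client invokes tools; it is stubbed below so the example is self-contained and runnable, not a real client implementation.

```python
import time

def call_tool(name, arguments):
    # Hypothetical stand-in for an MCP client's tool invocation.
    # Stubbed responses keep this sketch self-contained.
    if name == "transcribe_audio":
        return {"job_id": "job-1"}
    if name == "check_transcription_status":
        return {"status": "completed", "transcript_path": "project-transcript.txt"}
    raise ValueError(f"unknown tool: {name}")

def transcribe_and_wait(model_size="small", language=None, task="transcribe"):
    # Start the background job; a job_id is returned immediately.
    job = call_tool("transcribe_audio", {
        "model_size": model_size,
        "language": language,
        "task": task,
    })
    # Poll check_transcription_status every 10-15 seconds until done.
    while True:
        status = call_tool("check_transcription_status", {"job_id": job["job_id"]})
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(10)

result = transcribe_and_wait()
if result["status"] == "completed":
    # Always surface the file location to the user.
    print(f"Transcript saved to: {result['transcript_path']}")
```

The field names in the stubbed responses (`job_id`, `status`, `transcript_path`) are illustrative assumptions, not the server's documented response shape.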

Args:
- model_size: Whisper model - "tiny", "base", "small", "medium", "large-v3". Default: "small"
- language: ISO language code (e.g. "en", "fr") or None for auto-detect
- task: "transcribe" or "translate" (translate converts any language to English)

Input Schema

Name        Required  Description                                 Default
model_size  No        Whisper model size                          small
language    No        ISO language code, or None for auto-detect
task        No        "transcribe" or "translate"                 transcribe
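For example, an arguments payload matching this schema could be built as a plain dictionary; the values below are illustrative, and since all three fields are optional, an empty payload falls back to the defaults in the table:

```python
# Example arguments for transcribe_audio, matching the input schema.
args = {
    "model_size": "small",   # "tiny" | "base" | "small" | "medium" | "large-v3"
    "language": None,        # ISO code such as "en" or "fr"; None auto-detects
    "task": "transcribe",    # "transcribe" or "translate" (translate -> English)
}

# All fields are optional: an empty payload uses the defaults
# (model_size="small", language auto-detect, task="transcribe").
minimal_args = {}
```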
MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/xDarkzx/Audacity-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.