
audio_recognition

Analyze and transcribe audio files using Google Gemini AI. Provide a filepath and, optionally, a prompt or model name for accurate content recognition and transcription.

Instructions

Analyze and transcribe audio using Google Gemini AI

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| filepath | Yes | Path to the media file to analyze | (none) |
| modelname | No | Gemini model to use for recognition | gemini-2.0-flash |
| prompt | No | Custom prompt for the recognition | Describe this content |
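
A minimal example of the arguments an agent might pass when invoking this tool (the file path is hypothetical; `prompt` and `modelname` may be omitted to fall back to the defaults above):

```json
{
  "filepath": "/path/to/recording.mp3",
  "prompt": "Transcribe this audio verbatim",
  "modelname": "gemini-2.0-flash"
}
```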

Implementation Reference

  • The callback function implementing the audio_recognition tool's core logic: file validation, upload to the Gemini service, processing with an optional prompt and model, error handling, and returning a structured CallToolResult.
    callback: async (args: AudioRecognitionParams): Promise<CallToolResult> => {
      try {
        log.info(`Processing audio recognition request for file: ${args.filepath}`);
        log.verbose('Audio recognition request', JSON.stringify(args));
        
        // Verify file exists
        if (!fs.existsSync(args.filepath)) {
          throw new Error(`Audio file not found: ${args.filepath}`);
        }
        
        // Verify the file is a supported audio format
        const ext = path.extname(args.filepath).toLowerCase();
        if (!['.mp3', '.wav', '.ogg'].includes(ext)) {
          throw new Error(`Unsupported audio format: ${ext}. Supported formats are: .mp3, .wav, .ogg`);
        }
        
        // Default prompt if not provided
        const prompt = args.prompt || 'Describe this audio';
        const modelName = args.modelname || 'gemini-2.0-flash';
        
        // Upload the file
        log.info('Uploading audio file...');
        const file = await geminiService.uploadFile(args.filepath);
        
        // Process with Gemini
        log.info('Generating content from audio...');
        const result = await geminiService.processFile(file, prompt, modelName);
        
        if (result.isError) {
          log.error(`Error in audio recognition: ${result.text}`);
          return {
            content: [
              {
                type: 'text',
                text: result.text
              }
            ],
            isError: true
          };
        }
        
        log.info('Audio recognition completed successfully');
        log.verbose('Audio recognition result', JSON.stringify(result));
        
        return {
          content: [
            {
              type: 'text',
              text: result.text
            }
          ]
        };
      } catch (error) {
        log.error('Error in audio recognition tool', error);
        const errorMessage = error instanceof Error ? error.message : String(error);
        
        return {
          content: [
            {
              type: 'text',
              text: `Error processing audio: ${errorMessage}`
            }
          ],
          isError: true
        };
      }
    }
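The extension check in the callback above can be isolated into a small helper. This is a sketch under our own naming (the helper `isSupportedAudio` does not appear in the source), showing how the case-insensitive match against the three accepted extensions behaves:

```typescript
import * as path from 'path';

// Returns true only for the three extensions the tool accepts,
// matching case-insensitively (".MP3" passes, ".flac" does not).
const isSupportedAudio = (filepath: string): boolean =>
  ['.mp3', '.wav', '.ogg'].includes(path.extname(filepath).toLowerCase());
```

Note that a file with no extension at all yields an empty string from `path.extname` and is rejected the same way as an unsupported format.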
  • Defines the input schema for the audio_recognition tool using Zod: a common RecognitionParamsSchema is extended into AudioRecognitionParamsSchema with a required filepath and optional prompt and modelname fields.
    export const RecognitionParamsSchema = z.object({
      filepath: z.string().describe('Path to the media file to analyze'),
      prompt: z.string().default('Describe this content').describe('Custom prompt for the recognition'),
      modelname: z.string().default('gemini-2.0-flash').describe('Gemini model to use for recognition')
    });
    
    export type RecognitionParams = z.infer<typeof RecognitionParamsSchema>;
    
    /**
     * Video recognition specific types
     */
    export const VideoRecognitionParamsSchema = RecognitionParamsSchema.extend({});
    export type VideoRecognitionParams = z.infer<typeof VideoRecognitionParamsSchema>;
    
    /**
     * Image recognition specific types
     */
    export const ImageRecognitionParamsSchema = RecognitionParamsSchema.extend({});
    export type ImageRecognitionParams = z.infer<typeof ImageRecognitionParamsSchema>;
    
    /**
     * Audio recognition specific types
     */
    export const AudioRecognitionParamsSchema = RecognitionParamsSchema.extend({});
    export type AudioRecognitionParams = z.infer<typeof AudioRecognitionParamsSchema>;
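Because the schema above attaches `.default()` values to `prompt` and `modelname`, a caller only has to supply `filepath`. Conceptually, the default-filling behaves like this dependency-free sketch (ours, not the actual Zod parsing):

```typescript
type RecognitionArgs = { filepath: string; prompt?: string; modelname?: string };

// Mirrors the defaults declared in RecognitionParamsSchema.
const applyDefaults = (args: RecognitionArgs) => ({
  filepath: args.filepath,
  prompt: args.prompt ?? 'Describe this content',
  modelname: args.modelname ?? 'gemini-2.0-flash',
});
```

Note a small inconsistency in the source: the schema's default prompt is 'Describe this content', while the callback's runtime fallback is 'Describe this audio'.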
  • src/server.ts:54-70 (registration)
    Creates the audio_recognition tool instance and registers it with the MCP server using mcpServer.tool().
    const audioRecognitionTool = createAudioRecognitionTool(this.geminiService);
    const videoRecognitionTool = createVideoRecognitionTool(this.geminiService);
    
    // Register tools with MCP server
    this.mcpServer.tool(
      imageRecognitionTool.name,
      imageRecognitionTool.description,
      imageRecognitionTool.inputSchema.shape,
      imageRecognitionTool.callback
    );
    
    this.mcpServer.tool(
      audioRecognitionTool.name,
      audioRecognitionTool.description,
      audioRecognitionTool.inputSchema.shape,
      audioRecognitionTool.callback
    );
  • Factory function that creates the tool definition object with name, description, schema, and handler callback for audio_recognition.
    export const createAudioRecognitionTool = (geminiService: GeminiService) => {
      return {
        name: 'audio_recognition',
        description: 'Analyze and transcribe audio using Google Gemini AI',
        inputSchema: AudioRecognitionParamsSchema,
        callback: async (args: AudioRecognitionParams): Promise<CallToolResult> => {
          // ... identical to the callback implementation excerpted above ...
        }
      };
    };
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'analyze and transcribe' but fails to describe key traits such as processing time, error handling, output format, or any limitations (e.g., file size, supported audio formats). This leaves significant gaps for a tool that performs AI-based analysis.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is appropriately sized and front-loaded, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of AI-based audio analysis, no annotations, and no output schema, the description is incomplete. It lacks details on behavioral aspects, output structure, and usage context, which are critical for an agent to effectively invoke this tool without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for all parameters (filepath, modelname, prompt). However, the tool description adds no semantic context beyond what the schema provides, such as examples or constraints, so it meets only the baseline set by the high schema coverage without adding compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'analyze and transcribe' and the resource 'audio' with the technology 'Google Gemini AI', making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'image_recognition' or 'video_recognition' beyond the audio focus, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like sibling tools or other audio processing methods. It lacks context about use cases, prerequisites, or exclusions, leaving the agent to infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mario-andreschak/mcp_video_recognition'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.