
MCP Audio Tweaker

by DeveloperZo

create_harmonics

Generate harmonic variations for audio files by adding octaves and musical intervals to create richer sound textures.

Instructions

Create harmonic variations by adding octaves and musical intervals

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| inputFile | Yes | Path to input audio file | |
| outputDirectory | Yes | Directory for harmonic variations | |
| octaveUp | No | Mix level for octave up (0-1) | |
| octaveDown | No | Mix level for octave down (0-1) | |
| fifthUp | No | Mix level for perfect fifth up (0-1) | |
| thirdUp | No | Mix level for major third up (0-1) | |
| overwrite | No | Whether to overwrite existing output files | false |
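To make the schema concrete, a hypothetical call to this tool might pass arguments like the following (file paths and mix values are illustrative, not from the source):

```typescript
// Hypothetical arguments for a create_harmonics call (paths and values are illustrative).
const createHarmonicsArgs = {
  inputFile: "/audio/pad.wav",          // required
  outputDirectory: "/audio/harmonics",  // required
  octaveUp: 0.4,   // 40% octave-up layer blended with the dry signal
  fifthUp: 0.25,   // subtle perfect-fifth layer
  overwrite: false,
};
// Intervals left out (octaveDown, thirdUp) produce no output file,
// since the implementation only processes intervals with a mix level > 0.
```

Note that each supplied mix level yields one separate output file rather than a single combined render.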

Implementation Reference

  • Tool definition including name, description, and input schema for validating parameters like inputFile, outputDirectory, and harmonic mix levels.
    export const createHarmonicsTool: Tool = {
      name: 'create_harmonics',
      description: 'Create harmonic variations by adding octaves and musical intervals',
      inputSchema: {
        type: 'object',
        properties: {
          inputFile: {
            type: 'string',
            description: 'Path to input audio file'
          },
          outputDirectory: {
            type: 'string',
            description: 'Directory for harmonic variations'
          },
          octaveUp: {
            type: 'number',
            description: 'Mix level for octave up (0-1)',
            minimum: 0,
            maximum: 1,
            optional: true
          },
          octaveDown: {
            type: 'number',
            description: 'Mix level for octave down (0-1)',
            minimum: 0,
            maximum: 1,
            optional: true
          },
          fifthUp: {
            type: 'number',
            description: 'Mix level for perfect fifth up (0-1)',
            minimum: 0,
            maximum: 1,
            optional: true
          },
          thirdUp: {
            type: 'number',
            description: 'Mix level for major third up (0-1)',
            minimum: 0,
            maximum: 1,
            optional: true
          },
          overwrite: {
            type: 'boolean',
            description: 'Whether to overwrite existing output files',
            default: false
          }
        },
        required: ['inputFile', 'outputDirectory']
      }
    };
  • MCP tool handler that parses arguments, constructs harmonics config, calls AdvancedAudioProcessor.createHarmonicVariations, and returns formatted results.
    case 'create_harmonics': {
      const input = args as any;
      const harmonics = {
        octaveUp: input.octaveUp,
        octaveDown: input.octaveDown,
        fifthUp: input.fifthUp,
        thirdUp: input.thirdUp
      };
      
      const results = await advancedProcessor.createHarmonicVariations(
        input.inputFile,
        input.outputDirectory,
        harmonics,
        input.overwrite || false
      );
      
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify({
              success: true,
              harmonicsCreated: results.length,
              results: results
            }, null, 2)
          }
        ]
      };
    }
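Based on the handler above, the response it returns would have roughly this shape (the `harmonicsCreated` count and empty `results` array here are illustrative; the `ProcessingResult` fields are not shown in the excerpt):

```typescript
// Sketch of the MCP response built by the create_harmonics handler.
// Values are illustrative; results entries would be ProcessingResult objects.
const response = {
  content: [
    {
      type: "text",
      text: JSON.stringify(
        { success: true, harmonicsCreated: 2, results: [] },
        null,
        2
      ),
    },
  ],
};

// Clients read the JSON payload back out of the text content block.
const parsed = JSON.parse(response.content[0].text);
console.log(parsed.success, parsed.harmonicsCreated); // true 2
```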
  • Core implementation generating harmonic files by pitch-shifting input and layering original with shifted version at specified mix levels using FFmpeg operations.
    async createHarmonicVariations(
      inputFile: string,
      outputDirectory: string,
      harmonics: HarmonicsOperation,
      overwrite: boolean = false
    ): Promise<ProcessingResult[]> {
      const results: ProcessingResult[] = [];
      const baseName = path.parse(inputFile).name;
      
      const harmonicIntervals = [
        { name: 'octave_up', semitones: 12, mix: harmonics.octaveUp },
        { name: 'octave_down', semitones: -12, mix: harmonics.octaveDown },
        { name: 'fifth_up', semitones: 7, mix: harmonics.fifthUp },
        { name: 'third_up', semitones: 4, mix: harmonics.thirdUp }
      ];
      
      for (const interval of harmonicIntervals) {
        if (interval.mix && interval.mix > 0) {
          const outputFile = path.join(outputDirectory, 
            `${baseName}_${interval.name}${path.extname(inputFile)}`
          );
          
          const operations: AudioOperations = {
            advanced: {
              pitch: { semitones: interval.semitones },
              layering: {
                layers: [
                  { blend: 'mix', volume: 1 - interval.mix },
                  { blend: 'add', volume: interval.mix, pitch: interval.semitones }
                ]
              }
            }
          };
          
          const result = await this.processAudioFile(
            inputFile,
            outputFile,
            operations,
            overwrite
          );
          
          results.push(result);
        }
      }
      
      return results;
    }
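The loop above implies a predictable naming convention. A small sketch of the output paths it would produce (assuming Node's `path` module, as in the source) for a hypothetical input:

```typescript
import * as path from "path";

// Reproduce the output-file naming from createHarmonicVariations:
// `${baseName}_${interval.name}${ext}` inside outputDirectory,
// emitted only for intervals with a mix level > 0.
function harmonicOutputs(
  inputFile: string,
  outputDirectory: string,
  harmonics: { octaveUp?: number; octaveDown?: number; fifthUp?: number; thirdUp?: number }
): string[] {
  const baseName = path.parse(inputFile).name;
  const ext = path.extname(inputFile);
  const intervals = [
    { name: "octave_up", mix: harmonics.octaveUp },
    { name: "octave_down", mix: harmonics.octaveDown },
    { name: "fifth_up", mix: harmonics.fifthUp },
    { name: "third_up", mix: harmonics.thirdUp },
  ];
  return intervals
    .filter((i) => i.mix !== undefined && i.mix > 0) // unset intervals are skipped
    .map((i) => path.join(outputDirectory, `${baseName}_${i.name}${ext}`));
}

console.log(harmonicOutputs("/audio/pad.wav", "/out", { octaveUp: 0.4, fifthUp: 0.25 }));
```

So an input of `pad.wav` with `octaveUp` and `fifthUp` set would yield `pad_octave_up.wav` and `pad_fifth_up.wav` in the output directory.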
  • Exports array of all tools including createHarmonicsTool for server registration.
    export const tools = [
      processAudioFileTool,
      batchProcessAudioTool,
      applyPresetTool,
      listPresetsTool,
      getQueueStatusTool,
      generateVariationsTool,
      createHarmonicsTool,
      advancedProcessTool,
      layerSoundsTool
    ];
  • Registers the central tool request handler with switch statement covering create_harmonics case.
    export function registerTools(server: Server): void {
      // Register process_audio_file tool
      server.setRequestHandler(CallToolRequestSchema, async (request) => {
        const { name, arguments: args } = request.params;
        
        try {
          switch (name) {
            case 'process_audio_file': {
              try {
                const input = ProcessAudioFileInputSchema.parse(args);
                const result = await audioProcessor.processAudioFile(
                  input.inputFile,
                  input.outputFile,
                  input.operations,
                  (args as any).overwrite || false
                );
                
                return {
                  content: [
                    {
                      type: 'text',
                      text: JSON.stringify(result, null, 2)
                    }
                  ]
                };
              } catch (validationError) {
                // If validation fails, try with the advanced processor
                const result = await advancedProcessor.processAudioFile(
                  (args as any).inputFile,
                  (args as any).outputFile,
                  (args as any).operations,
                  (args as any).overwrite || false
                );
                
                return {
                  content: [
                    {
                      type: 'text',
                      text: JSON.stringify(result, null, 2)
                    }
                  ]
                };
              }
            }
            
            case 'batch_process_audio': {
              try {
                const input = BatchProcessAudioInputSchema.parse(args);
                const result = await audioProcessor.batchProcessAudio(
                  { directory: input.inputDirectory, pattern: input.filePattern },
                  { directory: input.outputDirectory },
                  input.operations,
                  (args as any).overwrite || false
                );
                
                return {
                  content: [
                    {
                      type: 'text',
                      text: JSON.stringify(result, null, 2)
                    }
                  ]
                };
              } catch (validationError) {
                // If validation fails, try with the advanced processor
                const result = await advancedProcessor.batchProcessAudio(
                  { directory: (args as any).inputDirectory, pattern: (args as any).filePattern },
                  { directory: (args as any).outputDirectory },
                  (args as any).operations,
                  (args as any).overwrite || false
                );
                
                return {
                  content: [
                    {
                      type: 'text',
                      text: JSON.stringify(result, null, 2)
                    }
                  ]
                };
              }
            }
            
            case 'apply_preset': {
              const input = ApplyPresetInputSchema.parse(args);
              const preset = getPreset(input.preset);
              
              const result = await audioProcessor.processAudioFile(
                input.inputFile,
                input.outputFile,
                preset.operations,
                (args as any).overwrite || false
              );
              
              return {
                content: [
                  {
                    type: 'text',
                    text: JSON.stringify({
                      ...result,
                      presetUsed: preset.name,
                      presetDescription: preset.description
                    }, null, 2)
                  }
                ]
              };
            }
            
            case 'list_presets': {
              const { listPresets, getPresetsByCategory } = await import('../utils/presets.js');
              const category = (args as any)?.category;
              
              const presets = category ? getPresetsByCategory(category) : listPresets();
              
              return {
                content: [
                  {
                    type: 'text',
                    text: JSON.stringify(presets, null, 2)
                  }
                ]
              };
            }
            
            case 'get_queue_status': {
              const status = audioProcessor.getQueueStatus();
              
              return {
                content: [
                  {
                    type: 'text',
                    text: JSON.stringify(status, null, 2)
                  }
                ]
              };
            }
            
            case 'generate_variations': {
              const input = args as any;
              const variations = {
                count: input.count || 5,
                pitchRange: input.pitchRange || 2,
                volumeRange: input.volumeRange || 3,
                spectralRange: input.spectralRange || 2,
                seed: input.seed
              };
              
              const results = await advancedProcessor.generateVariations(
                input.inputFile,
                input.outputDirectory,
                variations,
                undefined,
                input.overwrite || false
              );
              
              return {
                content: [
                  {
                    type: 'text',
                    text: JSON.stringify({
                      success: true,
                      variationsGenerated: results.length,
                      results: results
                    }, null, 2)
                  }
                ]
              };
            }
            
            case 'create_harmonics': {
              const input = args as any;
              const harmonics = {
                octaveUp: input.octaveUp,
                octaveDown: input.octaveDown,
                fifthUp: input.fifthUp,
                thirdUp: input.thirdUp
              };
              
              const results = await advancedProcessor.createHarmonicVariations(
                input.inputFile,
                input.outputDirectory,
                harmonics,
                input.overwrite || false
              );
              
              return {
                content: [
                  {
                    type: 'text',
                    text: JSON.stringify({
                      success: true,
                      harmonicsCreated: results.length,
                      results: results
                    }, null, 2)
                  }
                ]
              };
            }
            
            case 'advanced_process': {
              const input = args as any;
              const operations = {
                advanced: {
                  pitch: input.pitch,
                  tempo: input.tempo,
                  spectral: input.spectral,
                  dynamics: input.dynamics,
                  spatial: input.spatial
                }
              };
              
              const result = await advancedProcessor.processAudioFile(
                input.inputFile,
                input.outputFile,
                operations,
                input.overwrite || false
              );
              
              return {
                content: [
                  {
                    type: 'text',
                    text: JSON.stringify(result, null, 2)
                  }
                ]
              };
            }
            
            case 'layer_sounds': {
              const input = args as any;
              const layering = {
                layers: input.layers
              };
              
              const result = await advancedProcessor.layerSounds(
                input.inputFiles,
                input.outputFile,
                layering,
                input.overwrite || false
              );
              
              return {
                content: [
                  {
                    type: 'text',
                    text: JSON.stringify(result, null, 2)
                  }
                ]
              };
            }
            
            default:
              throw new Error(`Unknown tool: ${name}`);
          }
        } catch (error) {
          logger.error(`Tool execution failed: ${(error as Error).message}`);
          
          return {
            content: [
              {
                type: 'text',
                text: JSON.stringify({
                  error: {
                    code: 'TOOL_EXECUTION_FAILED',
                    message: (error as Error).message,
                    tool: name
                  }
                }, null, 2)
              }
            ],
            isError: true
          };
        }
      });
    }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'create' implies a write operation, it doesn't specify whether this is a destructive or safe process, what permissions might be required, or how errors are handled. The description lacks details on output behavior, file formats, or any side effects, which is a significant gap for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without any fluff or redundancy. It's front-loaded with the core action and method, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 7 parameters, no annotations, and no output schema, the description is inadequate. It doesn't explain what 'harmonic variations' output looks like (e.g., file types, naming conventions), how the process works, or any prerequisites. Given the complexity and lack of structured data, more context is needed for the agent to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter information beyond what's in the schema, such as explaining how the mix levels interact or what 'harmonic variations' entail. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('create harmonic variations') and the method ('by adding octaves and musical intervals'), which is specific and actionable. However, it doesn't explicitly differentiate this tool from sibling tools like 'generate_variations' or 'process_audio_file', which might have overlapping functionality in audio processing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'generate_variations', 'process_audio_file', and 'advanced_process', there's no indication of what makes this tool unique or when it's the appropriate choice, leaving the agent to guess based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
