auto_edit_to_music

Automatically edit video clips to match music beats in Adobe Premiere Pro. Cut footage to audio rhythm using beat detection for synchronized edits.

Instructions

Automatically creates an edit by cutting video clips to the beat of a music track.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| audioTrackId | Yes | The ID of the audio track containing the music | |
| videoClipIds | Yes | An array of video clip IDs to use for the edit | |
| editStyle | Yes | The desired editing style: `cuts_only`, `cuts_and_transitions`, or `beat_sync` | |
| sensitivity | No | Beat detection sensitivity (0-100) | 50 |
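A call satisfying this schema might look like the following sketch (the IDs here are illustrative placeholders, not real project identifiers):

```typescript
// Hypothetical arguments for auto_edit_to_music; all ID values are made up.
const exampleArgs = {
  audioTrackId: "audio-track-1",
  videoClipIds: ["clip-7", "clip-8", "clip-9"],
  editStyle: "beat_sync" as const,
  sensitivity: 65, // optional; the handler falls back to 50 when omitted
};
```

Omitting `sensitivity` is valid; the other three fields are required.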

Implementation Reference

  • Input schema and description for the 'auto_edit_to_music' tool, defining parameters: audioTrackId (string), videoClipIds (array of strings), editStyle (enum: 'cuts_only', 'cuts_and_transitions', 'beat_sync'), sensitivity (optional number 0-100). This is part of the tool registry returned by getAvailableTools().
```typescript
    {
      name: 'auto_edit_to_music',
      description: 'Automatically creates an edit by cutting video clips to the beat of a music track.',
      inputSchema: z.object({
        audioTrackId: z.string().describe('The ID of the audio track containing the music'),
        videoClipIds: z.array(z.string()).describe('An array of video clip IDs to use for the edit'),
        editStyle: z.enum(['cuts_only', 'cuts_and_transitions', 'beat_sync']).describe('The desired editing style'),
        sensitivity: z.number().optional().describe('Beat detection sensitivity (0-100)')
      })
    },
```
  • Tool registration and dispatch in the executeTool switch statement, calling the handler with validated arguments.
```typescript
    case 'auto_edit_to_music':
      return await this.autoEditToMusic(args.audioTrackId, args.videoClipIds, args.editStyle, args.sensitivity);
```
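The server validates arguments with Zod before dispatch. The constraints the schema enforces can be sketched in plain TypeScript; the function and type names below are illustrative, not part of the actual server:

```typescript
type EditStyle = "cuts_only" | "cuts_and_transitions" | "beat_sync";

interface AutoEditArgs {
  audioTrackId: string;
  videoClipIds: string[];
  editStyle: EditStyle;
  sensitivity?: number;
}

// Hypothetical validator mirroring the Zod schema's constraints.
function validateAutoEditArgs(raw: any): AutoEditArgs {
  if (typeof raw.audioTrackId !== "string") throw new Error("audioTrackId must be a string");
  if (!Array.isArray(raw.videoClipIds) || !raw.videoClipIds.every((id: any) => typeof id === "string")) {
    throw new Error("videoClipIds must be an array of strings");
  }
  const styles: EditStyle[] = ["cuts_only", "cuts_and_transitions", "beat_sync"];
  if (!styles.includes(raw.editStyle)) throw new Error("invalid editStyle");
  if (raw.sensitivity !== undefined &&
      (typeof raw.sensitivity !== "number" || raw.sensitivity < 0 || raw.sensitivity > 100)) {
    throw new Error("sensitivity must be a number between 0 and 100");
  }
  return raw as AutoEditArgs;
}
```

A dispatch that runs such a check first would reject malformed calls before any ExtendScript is generated.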
  • The core handler function for 'auto_edit_to_music'. Validates inputs, constructs and executes a Premiere Pro ExtendScript to detect beats in audio and sync video clips accordingly. Currently implements a placeholder that returns success without actual editing, noting need for advanced beat detection.
```typescript
    private async autoEditToMusic(audioTrackId: string, videoClipIds: string[], editStyle: string, sensitivity = 50): Promise<any> {
      // Note: the IDs are interpolated directly into the ExtendScript source,
      // so they must be validated/escaped upstream to avoid script injection.
      // The script body is wrapped in an IIFE so the early `return` statements
      // are legal and the final expression yields the JSON result.
      const script = `
        (function () {
          try {
            var audioTrack = app.project.getTrackByID("${audioTrackId}");
            var videoClips = [${videoClipIds.map(id => `app.project.getClipByID("${id}")`).join(', ')}];

            if (!audioTrack) {
              return JSON.stringify({
                success: false,
                error: "Audio track not found"
              });
            }

            // ExtendScript is ES3, so Array.prototype.filter is not available.
            var validVideoClips = [];
            for (var i = 0; i < videoClips.length; i++) {
              if (videoClips[i] !== null) {
                validVideoClips.push(videoClips[i]);
              }
            }
            if (validVideoClips.length === 0) {
              return JSON.stringify({
                success: false,
                error: "No valid video clips found"
              });
            }

            // This would require sophisticated beat detection and auto-editing algorithms.
            // For now, return a placeholder response with the detected parameters.
            return JSON.stringify({
              success: true,
              message: "Auto-edit to music analysis completed",
              audioTrackId: "${audioTrackId}",
              videoClipCount: validVideoClips.length,
              editStyle: "${editStyle}",
              sensitivity: ${sensitivity},
              note: "This feature requires advanced beat detection implementation"
            });
          } catch (e) {
            return JSON.stringify({
              success: false,
              error: e.toString()
            });
          }
        })();
      `;

      return await this.bridge.executeScript(script);
    }
```
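The placeholder above notes that real beat detection is still missing. As one hedged sketch of what could eventually back it, here is a naive energy-threshold onset detector over raw PCM samples; the function name, windowing choices, and threshold formula are all assumptions for illustration, not part of the server:

```typescript
// Hypothetical sketch: naive energy-based beat detection over mono PCM samples.
// Returns beat timestamps in seconds. Higher sensitivity lowers the threshold,
// flagging more windows as beats.
function detectBeats(samples: Float32Array, sampleRate: number, sensitivity = 50): number[] {
  const windowSize = Math.floor(sampleRate * 0.05); // 50 ms analysis windows
  const energies: number[] = [];
  for (let i = 0; i + windowSize <= samples.length; i += windowSize) {
    let e = 0;
    for (let j = i; j < i + windowSize; j++) e += samples[j] * samples[j];
    energies.push(e / windowSize); // mean energy of this window
  }
  const mean = energies.reduce((a, b) => a + b, 0) / energies.length;
  const threshold = mean * (2 - sensitivity / 100);
  const beats: number[] = [];
  for (let i = 1; i < energies.length; i++) {
    // A beat is a rising edge: this window crosses the threshold, the previous did not.
    if (energies[i] > threshold && energies[i - 1] <= threshold) {
      beats.push((i * windowSize) / sampleRate);
    }
  }
  return beats;
}
```

A production version would more likely use spectral-flux onset detection or an existing audio-analysis library, but even this crude detector shows where `sensitivity` would plug into the pipeline.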
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It says the tool 'creates an edit' but doesn't specify whether the operation is destructive, what permissions are required, how long it takes, or what the output looks like. For a tool that presumably generates new content, this lack of behavioral context is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that immediately conveys the core functionality without any wasted words. It's front-loaded with the main action and purpose, making it easy to understand at a glance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (automated editing with multiple parameters) and the absence of both annotations and an output schema, the description is insufficient. It doesn't explain what the tool returns, potential side effects, or error conditions, leaving significant gaps for an AI agent to understand how to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional meaning about parameters beyond implying they're used for beat-synced editing. This meets the baseline of 3 when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('automatically creates an edit'), the resource involved ('video clips'), and the method ('cutting to the beat of a music track'). It distinguishes itself from sibling tools like 'add_transition' or 'trim_clip' by focusing on automated music-synced editing rather than manual operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing existing audio tracks and video clips), nor does it compare to sibling tools like 'add_to_timeline' for manual editing or 'apply_effect' for other automated processes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
