findSimilarFrames

Identify visually similar frames in videos using Apple Vision image feature prints. Provide a reference frame to find matching scenes based on visual content analysis.

Instructions

Find frames that visually look like a reference frame using Apple Vision image feature prints. Accepts a frame assetId or a direct framePath and returns image-backed matches.

Input Schema

Name           Required  Description                                  Default
assetId        No        Reference keyframe asset ID
framePath      No        Reference image path on disk
videoIdOrUrl   No        Optional video scope for similarity search
maxResults     No
minSimilarity  No
dryRun         No
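
An invocation sketch, assuming an already-connected MCP TypeScript SDK client named client; the argument values are illustrative, not prescribed by the schema.

    // Hypothetical call; provide either assetId or framePath as the reference.
    const response = await client.callTool({
      name: "findSimilarFrames",
      arguments: {
        assetId: "frame-abc123",                   // or framePath: "/tmp/frames/ref.jpg"
        videoIdOrUrl: "https://example.com/talk",  // optional: limit the search scope
        maxResults: 5,
        minSimilarity: 0.7,
      },
    });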

Implementation Reference

  • The findSimilarFrames method on the VisualSearchEngine class performs the similarity search using Apple Vision feature vectors and cosine similarity. (Hedged sketches of the helper functions it calls appear after this list.)
    async findSimilarFrames(params: FindSimilarFramesParams): Promise<FindSimilarFramesResult> {
      // Resolve the reference frame from either assetId or framePath.
      const reference = await this.resolveReferenceFrame(params);
      const referenceVector = reference.featureVector;
      if (!referenceVector || referenceVector.length === 0) {
        throw new Error("Reference frame does not have an Apple Vision feature vector. Re-index the frame or provide a valid image path.");
      }

      // Keep only indexed frames that still exist on disk, carry a feature
      // vector, and are not the reference frame itself.
      const candidates = this.store.listFrames({ videoId: params.videoId }).filter((frame) => {
        if (!existsSync(frame.framePath)) return false;
        if (!frame.featureVector || frame.featureVector.length === 0) return false;
        if (frame.framePath === reference.framePath) return false;
        return true;
      });

      // Score by cosine similarity, drop matches below the threshold
      // (default 0.7), and keep the top maxResults (default 5, clamped to 1-20).
      const minSimilarity = params.minSimilarity ?? 0.7;
      const results = candidates
        .map((frame) => ({ frame, similarity: cosineSimilarity(referenceVector, frame.featureVector ?? []) }))
        .filter((item) => item.similarity >= minSimilarity)
        .sort((a, b) => b.similarity - a.similarity)
        .slice(0, clamp(params.maxResults ?? 5, 1, 20))
        .map(({ frame, similarity }) => ({
          similarity: round(similarity, 4),
          videoId: frame.videoId,
          sourceVideoUrl: frame.sourceVideoUrl,
          sourceVideoTitle: frame.sourceVideoTitle,
          frameAssetId: frame.frameAssetId,
          framePath: frame.framePath,
          timestampSec: frame.timestampSec,
          timestampLabel: formatTimestamp(frame.timestampSec),
          explanation: `Apple Vision feature-print similarity ${round(similarity, 3)}${frame.visualDescription ? ` • ${truncate(frame.visualDescription, 140)}` : ""}`,
          ocrText: frame.ocrText,
          visualDescription: frame.visualDescription,
        } satisfies SimilarFrameMatch));

      return {
        reference: {
          assetId: params.assetId,
          framePath: reference.framePath,
          videoId: reference.videoId,
        },
        results,
        searchedFrames: candidates.length,
        limitations: [
          "Similarity is image-to-image only. It finds frames that look alike using Apple Vision feature prints.",
          "Similarity search does not understand transcript text. It only compares visual frame features.",
        ],
      };
    }
  • Definition of the FindSimilarFramesResult interface describing the output structure. The excerpt was truncated; the fields after the reference block are completed here from the method's return statement.
    export interface FindSimilarFramesResult {
      reference: {
        assetId?: string;
        framePath: string;
        videoId?: string;
      };
      results: SimilarFrameMatch[];
      searchedFrames: number;
      limitations: string[];
    }
  • Definition of the FindSimilarFramesParams interface describing the input requirements. This excerpt was also truncated; minSimilarity is completed from its use in the method, and dryRun (typed as boolean here, an assumption) from the input schema.
    export interface FindSimilarFramesParams {
      assetId?: string;
      framePath?: string;
      videoId?: string;
      maxResults?: number;
      minSimilarity?: number;
      dryRun?: boolean;
    }
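
The helper utilities the method calls (cosineSimilarity, clamp, round, formatTimestamp, truncate) are not shown on this page. The sketch below gives plausible definitions for the three numeric ones; these bodies are assumptions for illustration, not the project's actual source.

    // Hedged reconstructions; names match the call sites above.

    // Cosine similarity between two feature vectors: dot(a, b) / (|a| * |b|).
    function cosineSimilarity(a: number[], b: number[]): number {
      if (a.length === 0 || a.length !== b.length) return 0;
      let dot = 0;
      let normA = 0;
      let normB = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
      }
      const denom = Math.sqrt(normA) * Math.sqrt(normB);
      return denom === 0 ? 0 : dot / denom;
    }

    // Restrict a value to the inclusive range [min, max].
    function clamp(value: number, min: number, max: number): number {
      return Math.min(max, Math.max(min, value));
    }

    // Round a number to the given number of decimal places.
    function round(value: number, decimals: number): number {
      const factor = 10 ** decimals;
      return Math.round(value * factor) / factor;
    }
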
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the method (Apple Vision image feature prints) and that it 'returns image-backed matches,' but lacks critical details: whether this is a read-only operation, performance characteristics, error conditions, or authentication requirements. For a tool with 6 parameters and no annotations, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, consisting of two clear sentences. The first sentence states the core functionality, and the second covers input/output basics. There's no wasted verbiage, though it could be slightly more structured (e.g., separating input and output details).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no annotations, no output schema), the description is incomplete. It lacks behavioral context, detailed parameter explanations, output format description, and usage guidelines. For a visual similarity search tool with multiple configuration options, this leaves too many unknowns for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (3 of 6 parameters have descriptions). The description adds minimal parameter semantics: it clarifies that inputs can be 'assetId or a direct framePath' and mentions 'similarity search,' but doesn't explain the purpose of videoIdOrUrl, maxResults, minSimilarity, or dryRun. With moderate schema coverage, the description provides some context but doesn't fully compensate for undocumented parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
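
For reference, the implementation shown earlier implies effective defaults even though the schema leaves them blank; this reading is derived from the code above, not from any documentation.

    // Defaults inferred from the method body (not documented in the schema):
    const minSimilarity = params.minSimilarity ?? 0.7;        // similarity threshold
    const maxResults = clamp(params.maxResults ?? 5, 1, 20);  // 5 by default, capped at 20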

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find frames that visually look like a reference frame using Apple Vision image feature prints.' It specifies the verb (find), resource (frames), and method (Apple Vision image feature prints). However, it doesn't explicitly differentiate from sibling tools like 'searchVisualContent' or 'extractKeyframes', which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions 'returns image-backed matches' but doesn't specify use cases, prerequisites, or exclusions. With many sibling tools available (e.g., searchVisualContent, extractKeyframes), the lack of comparative context leaves the agent guessing about appropriate selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rajanrengasamy/vidlens-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.