searchVisualContent

Search video content visually to find specific frames using OCR and AI descriptions. Returns matching images with timestamps for evidence-based discovery.

Instructions

Search the actual visual content of a video or your indexed frame library. Uses Apple Vision OCR, optional Gemini frame descriptions, and optional Gemini semantic embeddings. Always returns frame/image evidence with timestamps.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Visual search query, e.g. 'whiteboard diagram' or 'slide that says title research checklist' | |
| videoIdOrUrl | No | Optional video scope. If provided, the server can auto-index this video if needed. | |
| maxResults | No | Maximum number of matches to return (1-20) | 5 |
| minScore | No | Minimum match score to include (0-1) | 0.12 |
| autoIndexIfNeeded | No | If scoped to a video and no visual index exists yet, build it automatically | true |
| intervalSec | No | Frame interval to use if auto-indexing is triggered (2-3600) | |
| maxFrames | No | Frame cap to use if auto-indexing is triggered (1-100) | |
| imageFormat | No | One of 'jpg', 'png', 'webp' | |
| width | No | Frame width in pixels (160-3840) | |
| autoDownload | No | | |
| downloadFormat | No | One of 'best_video', 'worst_video' | |
| includeGeminiDescriptions | No | | |
| includeGeminiEmbeddings | No | | |
| dryRun | No | | |
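
A minimal invocation might look like the following sketch. The field names come from the input schema above; the values (including the URL) are purely illustrative:

```typescript
// Illustrative arguments for a searchVisualContent call.
// Field names match the input schema; values are examples only.
const args = {
  query: "slide that says research checklist",
  videoIdOrUrl: "https://example.com/watch?v=abc123", // hypothetical URL
  maxResults: 5,            // 1-20
  minScore: 0.12,           // 0-1; frames scoring below this are dropped
  autoIndexIfNeeded: true,  // build the visual index on demand if missing
  intervalSec: 10,          // frame interval if auto-indexing runs (2-3600)
};

console.log(JSON.stringify(args, null, 2));
```

Only `query` is required; everything else falls back to server defaults.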

Implementation Reference

  • The handler for "searchVisualContent" is implemented as 'searchText' inside the 'VisualSearchEngine' class. It performs visual search by querying the indexed SQLite database with a combination of lexical matching (OCR and description text) and semantic embeddings.
    async searchText(params: SearchVisualContentParams): Promise<SearchVisualContentResult> {
      const rawQuery = params.query?.trim();
      const normalizedQuery = normalizeText(rawQuery);
      if (!normalizedQuery) {
        throw new Error("query cannot be empty");
      }
    
      if (params.videoId && (params.autoIndexIfNeeded ?? true) && this.needsIndexing(params.videoId)) {
        await this.indexVideo({
          videoId: params.videoId,
          ...(params.indexIfNeeded ?? {}),
        });
      }
    
      const frames = this.store.listSearchFrames({ videoId: params.videoId }).filter((frame) => existsSync(frame.framePath));
      if (frames.length === 0) {
        throw new Error("No indexed visual frames found. Run indexVisualContent first, or provide videoIdOrUrl so search can auto-index it.");
      }
    
      const embeddingSummary = summarizeEmbeddingProvider(frames);
      let semanticQueryEmbedding: number[] | undefined;
    
      if (embeddingSummary.provider !== "none") {
        const selection: EmbeddingSelection = {
          kind: "gemini",
          model: embeddingSummary.model,
          dimensions: embeddingSummary.dimensions,
        };
        const cacheKey = buildEmbeddingCacheKey(rawQuery ?? normalizedQuery, selection);
        semanticQueryEmbedding = this.queryEmbeddingCache.get(cacheKey);
        if (!semanticQueryEmbedding) {
          const provider = await createEmbeddingProvider(selection);
          semanticQueryEmbedding = provider ? await provider.embedQuery(rawQuery ?? normalizedQuery) : undefined;
          if (semanticQueryEmbedding?.length) {
            this.queryEmbeddingCache.set(cacheKey, semanticQueryEmbedding);
          }
        }
      }
    
      const results = frames
        .map((frame) => scoreFrameAgainstQuery({ query: normalizedQuery, rawQuery: rawQuery ?? normalizedQuery, frame, semanticQueryEmbedding }))
        .filter((item) => item.score >= (params.minScore ?? 0.12))
        .sort((a, b) => b.score - a.score || (b.semanticScore ?? 0) - (a.semanticScore ?? 0) || b.lexicalScore - a.lexicalScore)
        .slice(0, clamp(params.maxResults ?? 5, 1, 20));
    
      // Compute coverage hints when scoped to a single video
      let coveredTimeRange: { startSec: number; endSec: number } | undefined;
      let needsExpansion: boolean | undefined;
      if (params.videoId) {
        const range = this.store.getFrameTimeRange(params.videoId);
        if (range) {
          coveredTimeRange = { startSec: range.minSec, endSec: range.maxSec };
          const videoAsset = this.findVideoAsset(params.videoId);
          const videoDuration = videoAsset?.durationSec;
          if (videoDuration && videoDuration > 0) {
            const coverage = (range.maxSec - range.minSec) / videoDuration;
            needsExpansion = coverage < 0.5;
          }
        }
      }
    
      return {
        query: rawQuery ?? normalizedQuery,
        results,
        searchedFrames: frames.length,
        searchedVideos: new Set(frames.map((frame) => frame.videoId)).size,
        descriptionProvider: summarizeDescriptionProvider(frames),
        embeddingProvider: embeddingSummary.provider,
        embeddingModel: embeddingSummary.model,
        queryMode: semanticQueryEmbedding ? "gemini_semantic_plus_lexical" : "ocr_description_lexical",
        coveredTimeRange,
        needsExpansion,
        limitations: buildSearchLimitations(summarizeDescriptionProvider(frames), embeddingSummary.provider),
      };
    }
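
The `scoreFrameAgainstQuery` helper is not shown in this reference. A plausible sketch, under the assumption that it blends token-overlap lexical matching over OCR/description text with cosine similarity over embeddings (the names, weights, and tokenization here are assumptions, not the server's actual tuning):

```typescript
// Hypothetical sketch of a frame scorer in the spirit of scoreFrameAgainstQuery.
// All names and weights are illustrative assumptions.
interface FrameLike {
  ocrText?: string;
  description?: string;
  embedding?: number[];
}

// Cosine similarity between two vectors; 0 when either vector is empty/zero.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < Math.min(a.length, b.length); i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// Fraction of query tokens that appear in the frame's text.
function lexicalOverlap(query: string, text: string): number {
  const q = new Set(query.toLowerCase().split(/\s+/).filter(Boolean));
  if (q.size === 0) return 0;
  const t = new Set(text.toLowerCase().split(/\s+/).filter(Boolean));
  let hits = 0;
  for (const tok of q) if (t.has(tok)) hits++;
  return hits / q.size;
}

function scoreFrame(query: string, frame: FrameLike, queryEmbedding?: number[]): number {
  const lexical = lexicalOverlap(query, `${frame.ocrText ?? ""} ${frame.description ?? ""}`);
  if (queryEmbedding && frame.embedding) {
    // Blend lexical and semantic evidence; the 50/50 split is illustrative.
    return 0.5 * lexical + 0.5 * cosine(queryEmbedding, frame.embedding);
  }
  return lexical; // lexical-only fallback when embeddings are unavailable
}
```

This matches the listing's observed behavior: frames without embeddings still score lexically, which is why the result's `queryMode` falls back to `"ocr_description_lexical"` when no query embedding exists.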
  • The MCP tool definition and input schema registration for "searchVisualContent" in 'src/server/mcp-server.ts'.
      name: "searchVisualContent",
      description: "Search the actual visual content of a video or your indexed frame library. Uses Apple Vision OCR, optional Gemini frame descriptions, and optional Gemini semantic embeddings. Always returns frame/image evidence with timestamps.",
      inputSchema: {
        type: "object",
        properties: {
          query: { type: "string", description: "Visual search query, e.g. 'whiteboard diagram' or 'slide that says title research checklist'" },
          videoIdOrUrl: { type: "string", description: "Optional video scope. If provided, the server can auto-index this video if needed." },
          maxResults: { type: "number", minimum: 1, maximum: 20 },
          minScore: { type: "number", minimum: 0, maximum: 1 },
          autoIndexIfNeeded: { type: "boolean", description: "If scoped to a video and no visual index exists yet, build it automatically (default true)" },
          intervalSec: { type: "number", minimum: 2, maximum: 3600, description: "Frame interval to use if auto-indexing is triggered" },
          maxFrames: { type: "number", minimum: 1, maximum: 100, description: "Frame cap to use if auto-indexing is triggered" },
          imageFormat: { type: "string", enum: ["jpg", "png", "webp"] },
          width: { type: "number", minimum: 160, maximum: 3840 },
          autoDownload: { type: "boolean" },
          downloadFormat: { type: "string", enum: ["best_video", "worst_video"] },
          includeGeminiDescriptions: { type: "boolean" },
          includeGeminiEmbeddings: { type: "boolean" },
          dryRun: { type: "boolean" },
        },
        required: ["query"],
        additionalProperties: false,
      },
    },
  • Type definitions for the parameters and results of the "searchVisualContent" tool.
    export interface SearchVisualContentParams {
      query: string;
      videoId?: string;
      maxResults?: number;
      minScore?: number;
      autoIndexIfNeeded?: boolean;
      indexIfNeeded?: Omit<IndexVisualContentParams, "videoId">;
    }
    
    export interface SearchVisualMatch {
      score: number;
      lexicalScore: number;
      semanticScore?: number;
      matchedOn: Array<"ocr" | "description" | "semantic">;
      videoId: string;
      sourceVideoUrl: string;
      sourceVideoTitle?: string;
      frameAssetId?: string;
      framePath: string;
      timestampSec: number;
      timestampLabel: string;
      explanation: string;
      ocrText?: string;
      visualDescription?: string;
    }
    
    export interface SearchVisualContentResult {
      query: string;
      results: SearchVisualMatch[];
      searchedFrames: number;
      searchedVideos: number;
      descriptionProvider: "none" | "gemini" | "mixed";
      embeddingProvider: "none" | "gemini" | "mixed";
      embeddingModel?: string;
      queryMode: "ocr_description_lexical" | "gemini_semantic_plus_lexical";
      coveredTimeRange?: { startSec: number; endSec: number };
      needsExpansion?: boolean;
      limitations: string[];
    }
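
Put together, a result for a lexical-only search (no Gemini providers configured) might take a shape like this. All identifiers, paths, and values are illustrative:

```typescript
// Example shape of a SearchVisualContentResult when no Gemini providers
// are configured. Ids, paths, and values are hypothetical.
const exampleResult = {
  query: "whiteboard diagram",
  results: [
    {
      score: 0.82,
      lexicalScore: 0.82,
      matchedOn: ["ocr"],
      videoId: "abc123",                        // hypothetical id
      sourceVideoUrl: "https://example.com/v/abc123",
      framePath: "/frames/abc123/000045.jpg",   // hypothetical path
      timestampSec: 45,
      timestampLabel: "00:45",
      explanation: "OCR text matched 'whiteboard diagram'",
      ocrText: "whiteboard diagram: research checklist",
    },
  ],
  searchedFrames: 120,
  searchedVideos: 1,
  descriptionProvider: "none",
  embeddingProvider: "none",
  queryMode: "ocr_description_lexical",
  limitations: ["No semantic embeddings indexed; matches are lexical only."],
};

console.log(exampleResult.results[0].timestampLabel);
```

Note that `embeddingModel`, `coveredTimeRange`, and `needsExpansion` are optional and absent here because the search was not scoped to a video with a known duration.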
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behaviors: it uses specific AI technologies (Apple Vision OCR, Gemini), can auto-index videos if needed, and always returns frame/image evidence with timestamps. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence defines purpose and technologies, second sentence specifies output behavior. Every word adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 14 parameters, no annotations, and no output schema, the description provides good purpose and behavioral context but lacks details about return format, error handling, or performance characteristics. It's adequate but leaves gaps given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 36%, but the description compensates by explaining the core functionality (searching visual content) and mentioning auto-indexing behavior. While it doesn't detail individual parameters, it provides essential context about what the tool does with parameters like videoIdOrUrl and autoIndexIfNeeded.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches visual content of videos or indexed frame libraries using specific technologies (Apple Vision OCR, Gemini). It distinguishes from siblings by focusing on visual content search rather than analysis, indexing, or text-based search tools like searchTranscripts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for visual content searches but doesn't explicitly state when to use this versus alternatives like findSimilarFrames or indexVisualContent. It mentions returning frame/image evidence with timestamps, which suggests use cases requiring visual proof, but lacks explicit guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rajanrengasamy/vidlens-mcp'