design.search_by_image

Read-only | Idempotent

Search visually similar design sections by uploading an image (base64 or URL). Optionally add a text query for hybrid search combining vision and text embeddings to find matching website components.

Instructions

Searches for design sections visually similar to an input image. Accepts either Base64-encoded image data or an HTTPS image URL. Similar designs are found via HNSW search over DINOv2 visual embeddings. When an optional text query is provided, the tool runs a hybrid search using RRF 3-source fusion (text 40% + vision 30% + fulltext 30%).
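
The weighted RRF fusion described above can be sketched as follows. This is a minimal illustration, not the server's actual implementation: the `RankedHit` type, function names, and the `k` constant are assumptions; only the 40/30/30 weights come from the tool description.

```typescript
// Weighted Reciprocal Rank Fusion over several ranked result lists.
type RankedHit = { id: string };

function rrfFuse(
  lists: { hits: RankedHit[]; weight: number }[],
  k = 60 // common RRF damping constant (assumed, not documented here)
): { id: string; score: number }[] {
  const scores = new Map<string, number>();
  for (const { hits, weight } of lists) {
    hits.forEach((hit, rank) => {
      // Each list contributes weight / (k + rank); rank is 1-based below.
      const prev = scores.get(hit.id) ?? 0;
      scores.set(hit.id, prev + weight / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}

// Example with the documented weights: text 40% + vision 30% + fulltext 30%.
const fused = rrfFuse([
  { hits: [{ id: "hero-1" }, { id: "cta-2" }], weight: 0.4 }, // text
  { hits: [{ id: "cta-2" }, { id: "hero-1" }], weight: 0.3 }, // vision
  { hits: [{ id: "hero-1" }], weight: 0.3 },                  // fulltext
]);
```

A section ranked well across all three lists (like `hero-1` here) accumulates contributions from each and outranks one that appears in fewer lists.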

Input Schema

Name | Required | Description | Default
--- | --- | --- | ---
image | Yes | Base64-encoded image data (the `data:image/...;base64,...` form is also accepted) or an HTTPS image URL | -
query | No | Optional text query for hybrid search (Japanese/English supported, 1-500 characters) | -
limit | No | Number of results to return (1-50) | 10
min_similarity | No | Minimum similarity threshold (0-1) | 0.3
section_type | No | Section type filter (hero, feature, cta, testimonial, pricing, footer, etc.) | -
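
A minimal input matching the parameters above might look like this. The `SearchByImageInput` interface and `withDefaults` helper are hypothetical stand-ins written for illustration; the server itself applies these defaults through its Zod schema.

```typescript
// Hypothetical input type mirroring the documented parameter table.
interface SearchByImageInput {
  image: string;            // Base64 data or HTTPS URL (required)
  query?: string;           // optional text query, 1-500 chars
  limit?: number;           // 1-50, default 10
  min_similarity?: number;  // 0-1, default 0.3
  section_type?: string;    // e.g. "hero", "cta", "pricing"
}

// Apply the documented defaults (sketch only).
function withDefaults(input: SearchByImageInput) {
  return {
    ...input,
    limit: input.limit ?? 10,
    min_similarity: input.min_similarity ?? 0.3,
  };
}

const call = withDefaults({
  image: "https://example.com/screenshot.png", // illustrative URL
  query: "dark hero with gradient background",
  section_type: "hero",
});
```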

Implementation Reference

  • Main handler function for design.search_by_image. Accepts image (Base64/URL), generates DINOv2 visual embedding, performs HNSW vector search. If text query provided, uses RRF 3-source fusion (text 40% + vision 30% + fulltext 30%). Implements caching via SHA-256 image digest.
    /**
     * design.search_by_image handler
     */
    export async function designSearchByImageHandler(
      input: unknown
    ): Promise<DesignSearchByImageOutput> {
      const startTime = Date.now();
    
      // Input validation
      let parsed: DesignSearchByImageInput;
      try {
        parsed = designSearchByImageInputSchema.parse(input);
      } catch (error) {
        const message =
          error instanceof z.ZodError
            ? error.errors.map((e) => `${e.path.join(".")}: ${e.message}`).join("; ")
            : "Invalid input";
        return {
          success: false,
          results: [],
          total: 0,
          searchMode: "vision_only",
          error: `${DESIGN_SEARCH_ERROR_CODES.INVALID_INPUT}: ${message}`,
        };
      }
    
      // Resolve DINOv2 service
      const dinov2Factory = dinov2ServiceDI.get();
      if (!dinov2Factory) {
        return {
          success: false,
          results: [],
          total: 0,
          searchMode: "vision_only",
          error: `${DESIGN_SEARCH_ERROR_CODES.SERVICE_UNAVAILABLE}: DINOv2 service not available`,
        };
      }
    
      // Resolve Prisma client
      const prismaFactory = prismaClientDI.get();
      if (!prismaFactory) {
        return {
          success: false,
          results: [],
          total: 0,
          searchMode: "vision_only",
          error: `${DESIGN_SEARCH_ERROR_CODES.SERVICE_UNAVAILABLE}: Database not available`,
        };
      }
    
      const dinov2 = dinov2Factory();
      const prisma = prismaFactory();
    
      // Cache check
      // To avoid the JSON.stringify CPU cost of Base64 image data (up to 10MB),
      // replace only the image field with its SHA-256 digest when building the cache key
      const imageDigest = createHash("sha256").update(parsed.image).digest("hex");
      const cacheKeyParams = { ...parsed, image: imageDigest } as unknown as Record<string, unknown>;
      const cacheKey = generateCacheKey("design.search_by_image", cacheKeyParams);
      const cachedResult = getCachedResult<DesignSearchByImageOutput>(cacheKey);
      if (cachedResult) {
        return cachedResult;
      }
    
      try {
        // Step 1: fetch the image (URL or Base64)
        let imageBuffer: Buffer;
        try {
          if (isImageUrl(parsed.image)) {
            imageBuffer = await fetchImageFromUrl(parsed.image);
          } else {
            imageBuffer = decodeBase64Image(parsed.image);
          }
        } catch (error) {
          const code = isImageUrl(parsed.image)
            ? parsed.image.includes("blocked")
              ? DESIGN_SEARCH_ERROR_CODES.SSRF_BLOCKED
              : DESIGN_SEARCH_ERROR_CODES.IMAGE_FETCH_FAILED
            : DESIGN_SEARCH_ERROR_CODES.IMAGE_DECODE_FAILED;
          return {
            success: false,
            results: [],
            total: 0,
            searchMode: "vision_only",
            error: `${code}: ${sanitizeErrorMessage(error)}`,
          };
        }
    
        // Step 2: preprocess the image (224x224x3 RGB)
        let preprocessedBuffer: Buffer;
        try {
          preprocessedBuffer = await preprocessImageForDINOv2(imageBuffer);
        } catch (error) {
          return {
            success: false,
            results: [],
            total: 0,
            searchMode: "vision_only",
            error: `${DESIGN_SEARCH_ERROR_CODES.IMAGE_DECODE_FAILED}: ${sanitizeErrorMessage(error)}`,
          };
        }
    
        // Step 3: generate the visual embedding with DINOv2
        const embeddingStartTime = Date.now();
        let visionEmbedding: number[];
        try {
          if (!dinov2.initialized) {
            await dinov2.initialize();
          }
          visionEmbedding = await dinov2.generateEmbedding(preprocessedBuffer);
    
          // Guard against NaN/Infinity
          if (visionEmbedding.some((v) => !Number.isFinite(v))) {
            throw new Error("Generated embedding contains NaN or Infinity");
          }
        } catch (error) {
          return {
            success: false,
            results: [],
            total: 0,
            searchMode: "vision_only",
            error: `${DESIGN_SEARCH_ERROR_CODES.EMBEDDING_FAILED}: ${sanitizeErrorMessage(error)}`,
          };
        }
        const embeddingTimeMs = Date.now() - embeddingStartTime;
    
        // Step 4: run the search
        const fetchLimit = parsed.limit * 3; // fetch extra candidates for RRF
    
        if (parsed.query) {
          // Hybrid RRF 3-source search
          const embeddingFactory = embeddingServiceDI.get();
          if (!embeddingFactory) {
            // If e5-base is unavailable, search with vision only
            logger.warn(
              "[design.search_by_image] EmbeddingService not available, falling back to vision-only"
            );
            const visionResults = await searchByVisionEmbedding(
              prisma,
              visionEmbedding,
              parsed.limit,
              parsed.min_similarity,
              parsed.section_type
            );
    
            const fallbackResult: DesignSearchByImageOutput = {
              success: true,
              results: visionResults,
              total: visionResults.length,
              searchMode: "vision_only",
              embeddingTimeMs,
            };
            setCachedResult(cacheKey, fallbackResult);
            return fallbackResult;
          }
    
          const embeddingService = embeddingFactory();
    
          // Generate the text embedding (e5-base, with "query:" prefix)
          const textEmbedding = await embeddingService.generateEmbedding(
            `query: ${parsed.query}`,
            "query"
          );
    
          if (!textEmbedding) {
            // Also fall back to vision-only when text embedding generation fails
            logger.warn(
              "[design.search_by_image] Text embedding generation failed, falling back to vision-only"
            );
            const visionResults = await searchByVisionEmbedding(
              prisma,
              visionEmbedding,
              parsed.limit,
              parsed.min_similarity,
              parsed.section_type
            );
    
            const textFallbackResult: DesignSearchByImageOutput = {
              success: true,
              results: visionResults,
              total: visionResults.length,
              searchMode: "vision_only",
              embeddingTimeMs,
            };
            setCachedResult(cacheKey, textFallbackResult);
            return textFallbackResult;
          }
    
          // 3-source parallel search
          const [textResults, visionResults, fulltextResults] = await Promise.all([
            searchByTextEmbedding(prisma, textEmbedding, fetchLimit, parsed.section_type),
            searchByVisionEmbedding(
              prisma,
              visionEmbedding,
              fetchLimit,
              0, // minSimilarity is not applied before RRF
              parsed.section_type
            ),
            searchByFulltext(prisma, parsed.query, fetchLimit, parsed.section_type),
          ]);
    
          // RRF fusion: text (40%) + vision (30%) + fulltext (30%)
          const merged = mergeWithRRF3Source(textResults, visionResults, fulltextResults, {
            text: 0.4,
            vision: 0.3,
            fulltext: 0.3,
          });
    
          // Apply minSimilarity and limit
          const filtered = merged
            .filter((r) => r.similarity >= parsed.min_similarity)
            .slice(0, parsed.limit);
    
          const hybridResult: DesignSearchByImageOutput = {
            success: true,
            results: filtered,
            total: filtered.length,
            searchMode: "hybrid_rrf",
            embeddingTimeMs,
          };
          setCachedResult(cacheKey, hybridResult);
          return hybridResult;
        } else {
          // Vision-only search
          const visionResults = await searchByVisionEmbedding(
            prisma,
            visionEmbedding,
            parsed.limit,
            parsed.min_similarity,
            parsed.section_type
          );
    
          const visionOnlyResult: DesignSearchByImageOutput = {
            success: true,
            results: visionResults,
            total: visionResults.length,
            searchMode: "vision_only",
            embeddingTimeMs,
          };
          setCachedResult(cacheKey, visionOnlyResult);
          return visionOnlyResult;
        }
      } catch (error) {
        logger.warn("[design.search_by_image] Search failed", {
          error: sanitizeErrorMessage(error),
        });
        return {
          success: false,
          results: [],
          total: 0,
          searchMode: "vision_only",
          error: `${DESIGN_SEARCH_ERROR_CODES.SEARCH_FAILED}: ${sanitizeErrorMessage(error)}`,
        };
      } finally {
        logger.info("[design.search_by_image] completed", {
          processingTimeMs: Date.now() - startTime,
        });
      }
    }
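
The caching trick in the handler (hashing the potentially multi-megabyte image payload before building the cache key) can be illustrated in isolation. This is a sketch: `generateCacheKey` here is a hypothetical stand-in for the real helper, assumed to hash the stringified parameters.

```typescript
import { createHash } from "node:crypto";

// Hypothetical stand-in for the project's cache-key helper.
function generateCacheKey(tool: string, params: Record<string, unknown>): string {
  return `${tool}:${createHash("sha256")
    .update(JSON.stringify(params))
    .digest("hex")}`;
}

function cacheKeyFor(input: { image: string; query?: string; limit?: number }) {
  // Hash the large image field once, then stringify only the 64-char digest,
  // avoiding JSON.stringify over megabytes of Base64 data.
  const imageDigest = createHash("sha256").update(input.image).digest("hex");
  return generateCacheKey("design.search_by_image", { ...input, image: imageDigest });
}

const keyA = cacheKeyFor({ image: "aGVsbG8=", query: "hero" });
const keyB = cacheKeyFor({ image: "aGVsbG8=", query: "hero" });
const keyC = cacheKeyFor({ image: "aGVsbG8=", query: "pricing" });
```

Identical inputs produce identical keys, while any change to the query (or image bytes) yields a different key, so cached results are only reused for exact repeats.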
  • Zod input schema for design.search_by_image. Validates image (required), query (optional, 1-500 chars), limit (1-50, default 10), min_similarity (0-1, default 0.3), and section_type (optional).
    export const designSearchByImageInputSchema = z.object({
      image: z
        .string()
        .min(1)
        .describe(
          "Base64-encoded image data (the data:image/...;base64,... form is also accepted) or an HTTPS image URL"
        ),
      query: z
        .string()
        .min(1)
        .max(500)
        .optional()
        .describe("Optional text query for hybrid search (Japanese/English supported)"),
      limit: z.number().int().min(1).max(50).default(10).describe("Number of results (1-50, default: 10)"),
      min_similarity: z
        .number()
        .min(0)
        .max(1)
        .default(0.3)
        .describe("Minimum similarity threshold (0-1, default: 0.3)"),
      section_type: z
        .string()
        .optional()
        .describe("Section type filter (hero, feature, cta, testimonial, pricing, footer, etc.)"),
    });
  • Tool definition including name, description, annotations (readOnly, idempotent), and JSON Schema input schema.
    export const designSearchByImageToolDefinition = {
      name: "design.search_by_image",
      description:
        "Searches for design sections visually similar to an input image. " +
        "Accepts Base64-encoded image data or an HTTPS image URL. " +
        "Finds similar designs via HNSW search over DINOv2 visual embeddings. " +
        "When an optional text query is given, runs a hybrid search using RRF 3-source fusion (text 40% + vision 30% + fulltext 30%).",
      annotations: {
        title: "Design Search by Image",
        readOnlyHint: true,
        idempotentHint: true,
        openWorldHint: false,
      },
      inputSchema: {
        type: "object" as const,
        properties: {
          image: {
            type: "string",
            description:
              "Base64-encoded image data (the data:image/...;base64,... form is also accepted) or an HTTPS image URL",
          },
          query: {
            type: "string",
            description: "Optional text query for hybrid search (Japanese/English supported, 1-500 characters)",
            minLength: 1,
            maxLength: 500,
          },
          limit: {
            type: "number",
            description: "Number of results (1-50, default: 10)",
            minimum: 1,
            maximum: 50,
            default: 10,
          },
          min_similarity: {
            type: "number",
            description: "Minimum similarity threshold (0-1, default: 0.3)",
            minimum: 0,
            maximum: 1,
            default: 0.3,
          },
          section_type: {
            type: "string",
            description:
              "Section type filter (hero, feature, cta, testimonial, pricing, footer, etc.)",
          },
        },
        required: ["image"],
      },
    };
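
Given that definition, an agent invokes the tool through an MCP `tools/call` request. The request shape below follows the MCP protocol; the argument values are illustrative only.

```typescript
// Illustrative MCP tools/call request body for this tool.
const toolCall = {
  method: "tools/call" as const,
  params: {
    name: "design.search_by_image",
    arguments: {
      image: "https://example.com/landing.png", // or a Base64 string
      query: "pricing table with three tiers",
      limit: 5,
    },
  },
};
```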
  • Handler registration in the toolHandlers map, mapping tool name to handler function.
    // design.search_by_image (similar-design search from an image)
    "design.search_by_image": designSearchByImageHandler,
  • Re-exports of all design.search_by_image components (handler, schema, DI factories, error codes) from the tools index.
    // design.search_by_image tool (similar-design search from an image)
    export {
      designSearchByImageHandler,
      designSearchByImageToolDefinition,
      designSearchByImageInputSchema,
      setDesignSearchDINOv2ServiceFactory,
      resetDesignSearchDINOv2ServiceFactory,

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. Description adds technical detail (HNSW, DINOv2, fusion weights) and hybrid search behavior, enriching beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose and input types, succinctly covering technical method and optional hybrid search. No redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers input, optional text, search method, and technical details. No output schema, but description doesn't need return format. Lacks error handling or pagination, but sufficient for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%. Description adds context for image (Base64/URL), query (hybrid search explanation), limit, and min_similarity. Adds value beyond schema, though section_type lacks further detail.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it searches for visually similar design sections from an image, detailing input formats (Base64 or HTTPS URL) and technology (DINOv2, HNSW). Distinct from siblings like 'design.similar_site' or 'layout.search'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by the hybrid-search description, but there is no explicit guidance on when to prefer this tool over siblings like 'design.similar_site' or 'background.search'. The description also gives no when-not-to-use advice and names no alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
