asset_ingest_external

Ingest externally generated images from tools like Midjourney or Ideogram, then automatically run matting, vectorization, and tier-0 validation to prepare assets for multi-platform bundling.

Instructions

Ingest an image the user generated in an external tool (Midjourney, Nano Banana, Ideogram web, Recraft, Flux Playground, etc.) and run the matte → vectorize (where applicable) → tier-0 validation pipeline. The round-trip endpoint for external_prompt_only mode.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| image_path | Yes | Absolute path to the locally-saved image. | |
| asset_type | Yes | One of: logo, app_icon, favicon, og_image, splash_screen, illustration, icon_pack, hero, sticker, transparent_mark. | |
| brand_bundle | No | Brand bundle object used for tier-0 validation and provenance hashing. | |
| expected_text | No | Intended wordmark text; enables OCR Levenshtein validation. | |
| vector | No | Run the raster-to-SVG vectorization stage. | true for logo / favicon / icon_pack, otherwise false |
| transparent | No | Run the matte stage. | true for logo / app_icon / sticker / transparent_mark / icon_pack / favicon, otherwise false |
| output_dir | No | Directory to write outputs into. | a timestamped `ingest-*` directory under the configured output directory |
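For concreteness, here is a hedged example of the arguments an MCP client might send to this tool. The file path and wordmark text are hypothetical, not taken from the project:

```typescript
// Hypothetical example arguments for asset_ingest_external.
// The path and "ACME" wordmark are illustrative only.
const exampleArgs = {
  image_path: "/home/user/Downloads/midjourney-logo.jpg", // required, absolute path
  asset_type: "logo",                                     // required, see enum
  expected_text: "ACME",  // enables OCR Levenshtein validation of the wordmark
  vector: true,           // explicit, though logo already defaults to true
  transparent: true       // explicit, though logo already defaults to true
};

// Only two keys are required by the schema:
const required = ["image_path", "asset_type"];
console.log(required.every((k) => k in exampleArgs)); // true
```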

Implementation Reference

  • Main handler function `ingestExternal` for the `asset_ingest_external` tool. Reads the image from disk, applies restoration pre-pass (re-encode lossy → PNG), matte (transparency), vectorize (raster→SVG), tier-0 validation, and returns a content-addressed AssetBundle.
    export async function ingestExternal(input: IngestExternalInputT): Promise<AssetBundle> {
      // Path-guard both the input (read) and the output (write) surface. These
      // come straight from an untrusted MCP caller; without the guard, a crafted
      // image_path lets the tool read arbitrary files, and a crafted output_dir
      // lets it write anywhere on disk. See src/security/paths.ts.
      const imagePath = safeReadPath(input.image_path);
      const outDir = safeWritePath(
        input.output_dir ?? resolve(CONFIG.outputDir, `ingest-${Date.now()}`)
      );
      mkdirSync(outDir, { recursive: true });
    
      const buf = Buffer.from(readFileSync(imagePath));
      const assetType = input.asset_type;
    
      const transparencyExpected = input.transparent ?? defaultTransparency(assetType);
      const vectorExpected = input.vector ?? defaultVector(assetType);
    
      const warnings: string[] = [];
      let masterPng: Buffer = buf;
    
      // Stage 0 — restoration pre-pass.
      // Users commonly paste JPEGs saved from Midjourney / Ideogram web /
      // Nano Banana. JPEG's 8×8 DCT blocks produce edge ringing + chroma
      // subsampling fringes that wreck matting (visible as halo rings around
      // the subject in the output alpha). We re-encode to lossless PNG and
      // apply a mild unsharp-mask to restore edge contrast before matte.
      // Skip when the file is already a lossless PNG/WebP/TIFF.
      const ext = extname(imagePath).toLowerCase();
      const lossy = [".jpg", ".jpeg", ".heic", ".heif", ".avif"].includes(ext);
      if (lossy) {
        const sharp = await loadSharp();
        if (sharp) {
          try {
            masterPng = await sharp(masterPng)
              .sharpen({ sigma: 0.6 })
              .png({ compressionLevel: 9 })
              .toBuffer();
            warnings.push(
              `restoration pre-pass: re-encoded ${ext} → PNG with mild sharpen to reduce JPEG edge ringing before matte`
            );
          } catch (e) {
            warnings.push(`restoration pre-pass skipped: ${(e as Error).message}`);
          }
        } else {
          warnings.push(
            `restoration pre-pass skipped: sharp not installed; JPEG compression fringes may produce halos around the matte subject`
          );
        }
      }
    
      // Stage 1 — matte.
      // We always run matte when transparency is expected; even if the input
      // already has an alpha channel, the matte pipeline's auto-mode no-ops
      // on already-alpha images and returns coverage stats.
      if (transparencyExpected) {
        const matted = await matte({ image: masterPng, mode: "auto" });
        masterPng = Buffer.from(matted.image);
        warnings.push(...matted.warnings);
      }
    
      // Persist the (possibly matted) raster.
      const masterPath = resolve(outDir, "master.png");
      writeFileSync(masterPath, masterPng);
    
      const variants: AssetBundle["variants"] = [
        {
          path: masterPath,
          format: "png",
          rgba: transparencyExpected,
          bytes: masterPng.length
        }
      ];
    
      // Stage 2 — vectorize.
      if (vectorExpected) {
        const paletteSize = vectorPaletteBudget(assetType);
        const maxPaths = vectorPathBudget(assetType);
        const vec = await vectorize({
          image: masterPng,
          palette_size: paletteSize,
          max_paths: maxPaths
        });
        const svgPath = resolve(outDir, "mark.svg");
        writeFileSync(svgPath, vec.svg);
        variants.push({
          path: svgPath,
          format: "svg",
          paths: vec.paths_count,
          bytes: vec.svg.length
        });
        warnings.push(...vec.warnings);
      }
    
      // Stage 3 — tier-0 validation.
      const validation = await tier0({
        image: masterPng,
        asset_type: assetType,
        transparency_required: transparencyExpected,
        ...(input.brand_bundle && { brand_bundle: input.brand_bundle }),
        ...(input.expected_text && { intended_text: input.expected_text })
      });
    
      // Stage 4 — provenance. We don't have a prompt here (the user generated
      // the image externally) so the prompt hash is over the ingest params.
      const ck = computeCacheKey({
        model: "external",
        seed: 0,
        prompt: `external-ingest:${imagePath}`,
        params: {
          asset_type: assetType,
          transparent: transparencyExpected,
          vector: vectorExpected
        }
      });
    
      return {
        mode: "api",
        asset_type: assetType,
        brief: `external:${imagePath}`,
        brand_bundle_hash: hashBundle(input.brand_bundle ?? null),
        variants,
        provenance: {
          model: "external",
          seed: 0,
          prompt_hash: ck.prompt_hash,
          params_hash: ck.params_hash
        },
        validations: validation,
        warnings: [
          `ingested external image from ${imagePath}`,
          `matte: ${transparencyExpected ? "applied" : "skipped"}, vectorize: ${vectorExpected ? "applied" : "skipped"}`,
          ...warnings,
          ...validation.warnings
        ]
      };
    }
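The provenance stage above keys the bundle on hashes of the synthetic prompt and the ingest params. A minimal sketch of how a `computeCacheKey`-style helper could derive those two fields; the SHA-256 scheme and key-sorting are assumptions for illustration, not the project's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Sketch of a content-addressed cache key. Field names (prompt_hash,
// params_hash) follow the handler above; the hashing details are assumed.
function sketchCacheKey(prompt: string, params: Record<string, unknown>) {
  const hash = (s: string) => createHash("sha256").update(s).digest("hex");
  return {
    prompt_hash: hash(prompt),
    // Sort keys so semantically equal param objects hash identically.
    params_hash: hash(JSON.stringify(params, Object.keys(params).sort()))
  };
}

const ck = sketchCacheKey("external-ingest:/tmp/logo.png", {
  asset_type: "logo",
  transparent: true,
  vector: true
});
console.log(ck.prompt_hash.length); // 64 hex characters
```

Sorting the keys before stringifying means `{ a, b }` and `{ b, a }` produce the same `params_hash`, which is what makes the key usable for cache lookups.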
  • Zod schema `IngestExternalInput` defining the input parameters: image_path (required), asset_type (required), brand_bundle, expected_text, vector, transparent, output_dir.
    export const IngestExternalInput = z.object({
      image_path: z
        .string()
        .describe(
          "Local filesystem path to an image the user generated in an external tool (Midjourney, Nano Banana, Ideogram web, etc.). The server re-enters the pipeline: matte → vectorize (if requested) → validate → bundle."
        ),
      asset_type: AssetTypeSchema,
      brand_bundle: BrandBundleSchema.optional(),
      expected_text: z
        .string()
        .optional()
        .describe(
          "If the asset should contain a wordmark, pass the intended text for OCR Levenshtein validation."
        ),
      vector: z
        .boolean()
        .optional()
        .describe(
          "If true, run the raster-to-SVG vectorization stage. Defaults to true for logo / favicon / icon_pack, false otherwise."
        ),
      transparent: z
        .boolean()
        .optional()
        .describe(
          "If true, run the matte stage. Defaults to true for logo / app_icon / sticker / transparent_mark / icon_pack / favicon."
        ),
      output_dir: z.string().optional()
    });
  • Tool registration with name 'asset_ingest_external', description, and inputSchema (JSON Schema form) inside the tools array.
    {
      name: "asset_ingest_external",
      description:
        "Ingest an image the user generated in an external tool (Midjourney, Nano Banana, Ideogram web, Recraft, Flux Playground, etc.) and run the matte → vectorize (where applicable) → tier-0 validation pipeline. The round-trip endpoint for external_prompt_only mode.",
      inputSchema: {
        type: "object",
        properties: {
          image_path: {
            type: "string",
            description: "Absolute path to the locally-saved image."
          },
          asset_type: {
            type: "string",
            enum: [
              "logo",
              "app_icon",
              "favicon",
              "og_image",
              "splash_screen",
              "illustration",
              "icon_pack",
              "hero",
              "sticker",
              "transparent_mark"
            ]
          },
          brand_bundle: { type: "object" },
          expected_text: { type: "string" },
          vector: { type: "boolean" },
          transparent: { type: "boolean" },
          output_dir: { type: "string" }
        },
        required: ["image_path", "asset_type"]
      },
      annotations: { openWorldHint: false }
    },
  • Route handler case in the CallToolRequestSchema switch: dispatches to ingestExternal() with parsed IngestExternalInput.
    case "asset_ingest_external":
      result = await ingestExternal(IngestExternalInput.parse(args ?? {}));
      break;
  • Helper functions: defaultTransparency, defaultVector, vectorPaletteBudget, vectorPathBudget — provide default settings per AssetType for the pipeline stages.
    function defaultTransparency(t: AssetType): boolean {
      // Source: rules/asset-enhancer-activate.md (transparency defaults by asset type)
      switch (t) {
        case "logo":
        case "app_icon":
        case "sticker":
        case "transparent_mark":
        case "icon_pack":
        case "favicon":
          return true;
        default:
          return false;
      }
    }
    
    function defaultVector(t: AssetType): boolean {
      switch (t) {
        case "logo":
        case "favicon":
        case "icon_pack":
          return true;
        default:
          return false;
      }
    }
    
    function vectorPaletteBudget(t: AssetType): number {
      // Source: docs/research/12-vector-svg-generation/ — fewer colors → cleaner SVG.
      switch (t) {
        case "favicon":
          return 3;
        case "icon_pack":
          return 2;
        case "logo":
          return 6;
        default:
          return 6;
      }
    }
    
    function vectorPathBudget(t: AssetType): number {
      // Source: rules/asset-enhancer-activate.md fact #3 (≤40 paths for a clean mark).
      switch (t) {
        case "favicon":
          return 8;
        case "icon_pack":
          return 12;
        case "logo":
          return 40;
        default:
          return 80;
      }
    }
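Taken together, these helpers mean the handler resolves each stage as `input.flag ?? defaultFlag(asset_type)`. A self-contained sketch of that resolution, with the switch statements rewritten as equivalent Set lookups:

```typescript
// Sketch: how ingestExternal resolves the effective pipeline stages from the
// optional request flags and the per-asset-type defaults above.
type AssetType =
  | "logo" | "app_icon" | "favicon" | "og_image" | "splash_screen"
  | "illustration" | "icon_pack" | "hero" | "sticker" | "transparent_mark";

const TRANSPARENT_BY_DEFAULT = new Set<AssetType>([
  "logo", "app_icon", "sticker", "transparent_mark", "icon_pack", "favicon"
]);
const VECTOR_BY_DEFAULT = new Set<AssetType>(["logo", "favicon", "icon_pack"]);

function resolveStages(t: AssetType, transparent?: boolean, vector?: boolean) {
  return {
    transparent: transparent ?? TRANSPARENT_BY_DEFAULT.has(t),
    vector: vector ?? VECTOR_BY_DEFAULT.has(t)
  };
}

console.log(resolveStages("og_image"));       // both stages skipped by default
console.log(resolveStages("og_image", true)); // caller forces the matte stage
console.log(resolveStages("logo"));           // matte and vectorize both run
```

Note that explicit `false` also wins over a `true` default, since `??` only falls through on `undefined`/`null` — a logo caller can opt out of vectorization with `vector: false`.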
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description outlines the pipeline steps but lacks details on side effects, permissions, what happens to the input image, or the output format. Annotations are minimal, so the description should compensate, but it does not.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, concise and front-loaded, but could better structure the pipeline steps and parameter context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 7 parameters, low schema coverage, no output schema, and a multi-step pipeline, the description is incomplete. It omits parameter semantics and does not explain 'tier-0 validation' or output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is only 14%, and the description adds no parameter explanations beyond the pipeline mention. It does not describe brand_bundle, expected_text, vector, transparent, or output_dir.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool ingests externally generated images and runs a specific pipeline (matte, vectorize, validation), distinguishing it from sibling generation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context by listing external tools and mentioning 'round-trip endpoint for external_prompt_only mode', but does not explicitly state when not to use it or name alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
