
restore_face

Restore blurry, damaged, or AI-generated faces to natural sharpness with adjustable fidelity. Enhances background and upscales images using CodeFormer.

Instructions

Restore blurry, damaged, or AI-generated faces to sharp, natural quality. Uses CodeFormer (NeurIPS 2022, state-of-the-art FID 32.65 on CelebA-Test). Adjustable fidelity — balance between quality enhancement and identity preservation. Also enhances the background and upsamples restored faces. Stable endpoint — model upgrades automatically as SOTA evolves. 5 sats per image, pay per request with Bitcoin Lightning — no API key or signup needed. Requires create_payment with toolName='restore_face'.
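The two-step payment flow above can be sketched as the JSON-RPC `tools/call` payloads an MCP client would send. This is a minimal illustration; the placeholder values (and the assumption that `create_payment` returns a `paymentId` to spend) are ours, not taken from the server's documentation:

```javascript
// Build a JSON-RPC 2.0 request for the MCP `tools/call` method.
function toolCallRequest(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// Step 1: purchase a 5-sat credit scoped to this specific tool.
const paymentReq = toolCallRequest(1, "create_payment", {
  toolName: "restore_face",
});

// Step 2: after the Lightning invoice is settled, spend the payment ID.
const restoreReq = toolCallRequest(2, "restore_face", {
  paymentId: "<paymentId from step 1>",
  imageBase64: "<base64 image or data URI>",
  fidelity: 0.5,
});
```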

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| paymentId | Yes | Valid payment ID (must be paid) | |
| imageBase64 | Yes | Base64-encoded image containing faces (PNG, JPEG, WEBP) or data URI | |
| fidelity | No | Fidelity to input: 0.0 = max quality enhancement, 1.0 = max identity preservation | 0.5 |
| background_enhance | No | Also enhance the background | true |
| face_upsample | No | Upsample restored faces | true |
| upscale | No | Output upscale factor, 1-4 | 2 |

Implementation Reference

  • index.js:39-39 (handler)
    The tool 'restore_face' is listed as one of the 33+ AI tools in the TOOLS array, but there is no local implementation — it is a remote tool handled by the server at https://sats4ai.com/api/mcp. The actual tool execution logic is on the remote server.
    "restore_face",
  • index.js:14-45 (registration)
    All tools including 'restore_face' are registered in the TOOLS array of this MCP client configuration package. The package only exports the tool names and config; no handler logic exists locally.
    const TOOLS = [
      "image",
      "video",
      "video_from_image",
      "text",
      "vision",
      "music",
      "tts",
      "transcription",
      "3d",
      "ocr",
      "file_convert",
      "email",
      "sms",
      "call",
      "voice_clone",
      "image_edit",
      "pdf_merge",
      "epub_to_audiobook",
      "convert_html_to_pdf",
      "translate_text",
      "extract_receipt",
      "ai_call",
      "remove_background",
      "upscale_image",
      "restore_face",
      "detect_nsfw",
      "detect_objects",
      "remove_object",
      "colorize_image",
      "deblur_image",
    ];
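Since the package only ships tool names and configuration, a thin lookup over the TOOLS array is all the local code can do; execution happens at the remote endpoint. A hypothetical sketch of that pattern (the `remoteToolConfig` helper and its return shape are assumptions; only the endpoint URL and tool names come from the notes above):

```javascript
// Remote MCP endpoint that actually executes the tools.
const REMOTE_ENDPOINT = "https://sats4ai.com/api/mcp";

// Excerpt of the exported TOOLS array from index.js.
const TOOLS = ["upscale_image", "restore_face", "deblur_image"];

// Resolve a tool name to its remote execution config; there is no
// local handler logic, so unknown names can only be rejected.
function remoteToolConfig(name) {
  if (!TOOLS.includes(name)) {
    throw new Error(`Unknown tool: ${name}`);
  }
  return { name, endpoint: REMOTE_ENDPOINT };
}
```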
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behaviors: uses CodeFormer, adjustable fidelity, background enhancement, upsampling, stable endpoint with automatic model upgrades, cost (5 sats), and payment flow. Lacks details on failure modes (e.g., no face detected) or response format, but covers core traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is 5 sentences, each adding distinct information: purpose, model citation, fidelity, enhancements, stability and cost. Front-loaded with main action. Concise but could be slightly tighter without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description does not explain what the tool returns (e.g., base64 image, JSON, or job ID). Given the complexity (6 params, no output schema) and sibling tools like get_job_result, the description should clarify the response format. This omission limits completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 6 parameters have descriptions in the input schema (100% coverage). The description adds marginal value beyond the schema (e.g., reiterating fidelity balance). Baseline 3 is appropriate since the schema already explains each parameter well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Restore blurry, damaged, or AI-generated faces to sharp, natural quality.' It specifically names the underlying model (CodeFormer) and cites performance metrics, making it distinct from general image enhancement tools. No confusion with sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly mentions prerequisite: 'Requires create_payment with toolName="restore_face".' Provides context for when to use adjustable parameters (fidelity, background_enhance, etc.). However, does not directly compare with sibling tools like deblur_image or analyze_image, so some inference is needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

